[GitHub] [hbase] Apache-HBase commented on pull request #3372: HBASE-25986 set default value of normalization enabled from hbase site
Apache-HBase commented on pull request #3372:
URL: https://github.com/apache/hbase/pull/3372#issuecomment-858332090

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 1m 3s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
||| _ master Compile Tests _ |
| +0 :ok: | mvndep | 0m 22s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 4m 13s | master passed |
| +1 :green_heart: | compile | 5m 2s | master passed |
| +1 :green_heart: | checkstyle | 2m 0s | master passed |
| +0 :ok: | refguide | 3m 44s | branch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect. |
| +1 :green_heart: | spotbugs | 3m 56s | master passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 12s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 4m 2s | the patch passed |
| +1 :green_heart: | compile | 5m 3s | the patch passed |
| +1 :green_heart: | javac | 5m 3s | the patch passed |
| +1 :green_heart: | checkstyle | 0m 23s | The patch passed checkstyle in hbase-common |
| +1 :green_heart: | checkstyle | 0m 28s | hbase-client: The patch generated 0 new + 29 unchanged - 1 fixed = 29 total (was 30) |
| +1 :green_heart: | checkstyle | 1m 10s | The patch passed checkstyle in hbase-server |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | xml | 0m 1s | The patch has no ill-formed XML file. |
| +0 :ok: | refguide | 3m 38s | patch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect. |
| +1 :green_heart: | hadoopcheck | 19m 55s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0. |
| +1 :green_heart: | spotbugs | 4m 31s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | asflicense | 0m 33s | The patch does not generate ASF License warnings. |
| | | 69m 8s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3372/1/artifact/yetus-general-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/3372 |
| Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti checkstyle compile refguide xml |
| uname | Linux a7501911cff8 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 329f0baa98 |
| Default Java | AdoptOpenJDK-1.8.0_282-b08 |
| refguide | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3372/1/artifact/yetus-general-check/output/branch-site/book.html |
| refguide | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3372/1/artifact/yetus-general-check/output/patch-site/book.html |
| Max. process+thread count | 86 (vs. ulimit of 3) |
| modules | C: hbase-common hbase-client hbase-server U: . |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3372/1/console |
| versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.

-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] saintstack commented on a change in pull request #3371: HBASE-25984: Avoid premature reuse of sync futures in FSHLog [DRAFT]
saintstack commented on a change in pull request #3371:
URL: https://github.com/apache/hbase/pull/3371#discussion_r648866811

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/AsyncFSWAL.java

@@ -263,6 +263,14 @@ public AsyncFSWAL(FileSystem fs, Abortable abortable, Path rootDir, String logDir
       DEFAULT_ASYNC_WAL_WAIT_ON_SHUTDOWN_IN_SECONDS);
   }
 
+  /**
+   * Helper that marks the future as DONE and offers it back to the cache.
+   */
+  private void markFutureDoneAndOffer(SyncFuture future, long txid, Throwable t) {
+    future.done(txid, t);
+    syncFutureCache.offer(future);

Review comment: Where is the future overwrite here? The call to 'done'?

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/SyncFutureCache.java

@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver.wal;
+
+import java.util.concurrent.TimeUnit;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.hbase.thirdparty.com.google.common.cache.Cache;
+import org.apache.hbase.thirdparty.com.google.common.cache.CacheBuilder;
+
+/**
+ * A cache of {@link SyncFuture}s. This class supports two methods
+ * {@link SyncFutureCache#getIfPresentOrNew()} and {@link SyncFutureCache#offer()}.
+ *
+ * Usage pattern:
+ *   SyncFuture sf = syncFutureCache.getIfPresentOrNew();
+ *   sf.reset(...);
+ *   // Use the sync future
+ *   finally: syncFutureCache.offer(sf);
+ *
+ * Offering the sync future back to the cache makes it eligible for reuse within the same thread
+ * context. The cache is keyed by the accessing thread instance and automatically invalidated if it
+ * remains unused for {@link SyncFutureCache#SYNC_FUTURE_INVALIDATION_TIMEOUT_MINS} minutes.
+ */
+@InterfaceAudience.Private
+public final class SyncFutureCache {
+
+  private final long SYNC_FUTURE_INVALIDATION_TIMEOUT_MINS = 2;
+
+  private final Cache syncFutureCache;
+
+  public SyncFutureCache(final Configuration conf) {
+    final int handlerCount = conf.getInt(HConstants.REGION_SERVER_HANDLER_COUNT,
+        HConstants.DEFAULT_REGION_SERVER_HANDLER_COUNT);
+    syncFutureCache = CacheBuilder.newBuilder().initialCapacity(handlerCount)

Review comment: I thought this guava cache was 'slow'. Ben Manes tried to get a patch into guava that improved it but couldn't get interest, so his caffeine cache has a means of implementing the guava cache API, letting you drop his thing in instead. Maybe it doesn't matter here because the scale of objects is small? It is a critical section though.
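The per-thread reuse pattern under discussion can be sketched with stdlib types only. The following is an illustrative stand-in, not the patch's code: it uses a plain `ConcurrentHashMap` keyed by thread instead of the Guava `CacheBuilder` cache in the PR (so there is no time-based invalidation here), and `SyncFutureStub` is a hypothetical placeholder for the real `SyncFuture`.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for the patch's SyncFuture; only identity and reuse matter here.
class SyncFutureStub {
    long txid;
    SyncFutureStub reset(long txid) { this.txid = txid; return this; }
}

public class SyncFutureCacheSketch {
    // One cached future per thread, mirroring the "keyed by the accessing thread" javadoc.
    private final Map<Thread, SyncFutureStub> cache = new ConcurrentHashMap<>();

    public SyncFutureStub getIfPresentOrNew() {
        // Remove so the same future cannot be handed out twice while in use.
        SyncFutureStub f = cache.remove(Thread.currentThread());
        return f != null ? f : new SyncFutureStub();
    }

    public void offer(SyncFutureStub future) {
        // Offering back makes the instance eligible for reuse by this thread.
        cache.put(Thread.currentThread(), future);
    }

    public static void main(String[] args) {
        SyncFutureCacheSketch c = new SyncFutureCacheSketch();
        SyncFutureStub first = c.getIfPresentOrNew().reset(1L);
        c.offer(first);
        // The same thread gets the same instance back: the reuse the patch is after.
        if (c.getIfPresentOrNew() != first) throw new AssertionError("expected reuse");
        System.out.println("reused");
    }
}
```

On saintstack's performance question: Caffeine advertises a Guava-cache-compatible adapter, so swapping implementations would not change the call sites shown in the diff.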
[jira] [Commented] (HBASE-25596) Fix NPE in ReplicationSourceManager as well as avoid permanently unreplicated data due to EOFException from WAL
[ https://issues.apache.org/jira/browse/HBASE-25596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17360571#comment-17360571 ] Sandeep Pal commented on HBASE-25596: - I will try to write a UT for this. >>To be more clear, if we hit EOFException, it means that the WAL file is >>empty, so we are safe to just skip this file without shipping the 'existing' >>batch I believe there can be entries for multiple WAL files in an existing batch, I am referring to branch-1 code. We only break the batch when we hit some thresholds as in the below code. {code:java} if (totalBufferTooLarge || batch.getHeapSize() >= replicationBatchSizeCapacity || batch.getNbEntries() >= replicationBatchCountCapacity) { break; } {code} > Fix NPE in ReplicationSourceManager as well as avoid permanently unreplicated > data due to EOFException from WAL > --- > > Key: HBASE-25596 > URL: https://issues.apache.org/jira/browse/HBASE-25596 > Project: HBase > Issue Type: Bug >Reporter: Sandeep Pal >Assignee: Sandeep Pal >Priority: Critical > Fix For: 3.0.0-alpha-1, 1.7.0, 2.5.0, 2.4.2 > > > There seems to be a major issue with how we handle the EOF exception from > WALEntryStream. > Problem: > When we see EOFException, we try to handle it and remove it from the log > queue, but we never try to ship the existing batch of entries. *This is a > permanent data loss in replication.* > > Secondly, we do not stop the reader on encountering the EOFException and thus > if EOFException was on the last WAL, we still try to process the WALEntry > stream and ship the empty batch with lastWALPath set to null. This is the > reason of NPE as below which *crash* the region server. 
> {code:java}
> 2021-02-16 15:33:21,293 ERROR [,60020,1613262147968] regionserver.ReplicationSource - Unexpected exception in ReplicationSourceWorkerThread, currentPath=null
> java.lang.NullPointerException
>     at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.logPositionAndCleanOldLogs(ReplicationSourceManager.java:193)
>     at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceShipperThread.updateLogPosition(ReplicationSource.java:831)
>     at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceShipperThread.shipEdits(ReplicationSource.java:746)
>     at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceShipperThread.run(ReplicationSource.java:650)
> 2021-02-16 15:33:21,294 INFO [,60020,1613262147968] regionserver.HRegionServer - STOPPED: Unexpected exception in ReplicationSourceWorkerThread
> {code}

-- This message was sent by Atlassian Jira (v8.3.4#803005)
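The failure mode described above (a pending batch dropped when a later, empty WAL throws `EOFException`) can be simulated with a toy reader loop; whether the real reader can actually reach this state is exactly what the rest of this thread debates. All names below are illustrative, not HBase APIs.

```java
import java.io.EOFException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class BatchLossSketch {
    static final List<String> shipped = new ArrayList<>();

    // Buggy handling, as described in the comment: an EOF on any WAL in the
    // queue abandons the whole pending batch instead of shipping it first.
    static void readAndShip(List<List<String>> wals) {
        List<String> batch = new ArrayList<>();
        try {
            for (List<String> wal : wals) {
                if (wal.isEmpty()) {
                    throw new EOFException("empty WAL");
                }
                batch.addAll(wal);
            }
            shipped.addAll(batch); // only reached when no WAL hit EOF
        } catch (EOFException e) {
            // Skip the file -- and silently drop the batch built so far.
        }
    }

    public static void main(String[] args) {
        List<List<String>> wals = new ArrayList<>();
        wals.add(Arrays.asList("e1", "e2")); // healthy WAL with two entries
        wals.add(new ArrayList<>());         // empty WAL: triggers EOFException
        readAndShip(wals);
        System.out.println(shipped.size()); // prints 0: e1 and e2 were never shipped
    }
}
```

The hedge matters: if, as Duo Zhang argues below, the batch is guaranteed empty whenever the EOF fires, this loss cannot occur in practice.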
[jira] [Commented] (HBASE-25391) Flush directly into data directory, skip rename when committing flush
[ https://issues.apache.org/jira/browse/HBASE-25391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17360560#comment-17360560 ] Michael Stack commented on HBASE-25391: --- Yeah, what [~anoop.hbase] said. Trying to figure how this relates to the parent issue and design. It looks like a subtask... Needs a bunch of other work to happen before it is made use of (just trying to follow along [~wchevreuil] – thanks). > Flush directly into data directory, skip rename when committing flush > - > > Key: HBASE-25391 > URL: https://issues.apache.org/jira/browse/HBASE-25391 > Project: HBase > Issue Type: Sub-task >Reporter: Tak-Lon (Stephen) Wu >Assignee: Wellington Chevreuil >Priority: Major > > {color:#00}When flushing memstore snapshot to HFile, we write it directly > to the data directory.{color}
[jira] [Commented] (HBASE-25987) Make SSL keystore type configurable for HBase ThriftServer
[ https://issues.apache.org/jira/browse/HBASE-25987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17360558#comment-17360558 ] Hudson commented on HBASE-25987: Results for branch branch-2.4 [build #139 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/139/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/139/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/139/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/139/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/139/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} > Make SSL keystore type configurable for HBase ThriftServer > -- > > Key: HBASE-25987 > URL: https://issues.apache.org/jira/browse/HBASE-25987 > Project: HBase > Issue Type: Improvement > Components: Thrift >Affects Versions: 3.0.0-alpha-1, 2.2.7, 2.5.0, 2.3.5, 2.4.4 >Reporter: Mate Szalay-Beko >Assignee: Mate Szalay-Beko >Priority: Major > Fix For: 3.0.0-alpha-1, 2.5.0, 2.3.6, 2.4.5 > > > We get the following exception, when trying to start Hbase Thrift Server in > http mode (hbase.regionserver.thrift.http=true) and use non default (not > "jks") keystore type: > > {noformat} > 2021-06-08 07:40:10,275 ERROR org.apache.hadoop.hbase.thrift.ThriftServer: > Cannot run ThriftServer > java.io.IOException: Invalid keystore format > at > sun.security.provider.JavaKeyStore.engineLoad(JavaKeyStore.java:663) > at > sun.security.provider.JavaKeyStore$JKS.engineLoad(JavaKeyStore.java:56) > at > sun.security.provider.KeyStoreDelegator.engineLoad(KeyStoreDelegator.java:224) > at > sun.security.provider.JavaKeyStore$DualFormatJKS.engineLoad(JavaKeyStore.java:70) > at java.security.KeyStore.load(KeyStore.java:1445) > at > org.eclipse.jetty.util.security.CertificateUtils.getKeyStore(CertificateUtils.java:54) > at > org.eclipse.jetty.util.ssl.SslContextFactory.loadKeyStore(SslContextFactory.java:1197) > at > org.eclipse.jetty.util.ssl.SslContextFactory.load(SslContextFactory.java:321) > at > org.eclipse.jetty.util.ssl.SslContextFactory.doStart(SslContextFactory.java:243) > at > org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73) > at > org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169) > at > org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117) > at > org.eclipse.jetty.server.SslConnectionFactory.doStart(SslConnectionFactory.java:97) > at > org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73) > at > 
org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169) > at > org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117) > at > org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:321) > at > org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:81) > at > org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:234) > at > org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73) > at org.eclipse.jetty.server.Server.doStart(Server.java:401) > at > org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73) > at > org.apache.hadoop.hbase.thrift.ThriftServer$2.run(ThriftServer.java:861) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:360) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878) > at > org.apache.hadoop.hbase.thrift.ThriftServer.run(ThriftServer.java:855) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) > at > org.a
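The fix implied by this issue is to make the keystore type a configuration knob instead of relying on the JDK default (`jks`). Below is a minimal sketch of that idea using only `java.security.KeyStore`; the config-lookup helper is hypothetical and does not name the actual HBase property, which is not shown in this mail.

```java
import java.security.KeyStore;
import java.security.KeyStoreException;

public class KeystoreTypeSketch {
    // Hypothetical config lookup: the real patch adds an HBase property for the
    // Thrift server's keystore type; the property name is illustrative only.
    static String keystoreType(String configured) {
        // Fall back to the JDK default ("jks" on stock JDKs) when nothing is configured.
        return configured != null ? configured : KeyStore.getDefaultType();
    }

    public static void main(String[] args) throws KeyStoreException {
        // Selecting "PKCS12" here avoids the "Invalid keystore format" failure seen
        // when a PKCS12 file is pushed through the hard-coded default JKS loader.
        KeyStore ks = KeyStore.getInstance(keystoreType("PKCS12"));
        System.out.println(ks.getType()); // prints PKCS12
    }
}
```

The stack trace above fails inside Jetty's `CertificateUtils.getKeyStore`, which loads whatever type it is handed, so plumbing the configured type through to that call is the whole fix.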
[jira] [Commented] (HBASE-25391) Flush directly into data directory, skip rename when committing flush
[ https://issues.apache.org/jira/browse/HBASE-25391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17360553#comment-17360553 ] Anoop Sam John commented on HBASE-25391: Can we have a Release Note please? Will this be the default way, or what configs are needed to switch to this mode of flush? Is it applicable to compaction writes as well? Sorry, I did not see the PR, so I am asking here. It would be good to explain these points in the Release Note.
[GitHub] [hbase] mnpoonia opened a new pull request #3372: HBASE-25986 set default value of normalization enabled from hbase site
mnpoonia opened a new pull request #3372: URL: https://github.com/apache/hbase/pull/3372
[jira] [Commented] (HBASE-25596) Fix NPE in ReplicationSourceManager as well as avoid permanently unreplicated data due to EOFException from WAL
[ https://issues.apache.org/jira/browse/HBASE-25596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17360531#comment-17360531 ] Duo Zhang commented on HBASE-25596: --- To be more clear, if we hit EOFException, it means that the WAL file is empty, so we are safe to just skip this file without shipping the 'existing' batch, as we can make sure that the batch is null. Thanks.
[jira] [Commented] (HBASE-25596) Fix NPE in ReplicationSourceManager as well as avoid permanently unreplicated data due to EOFException from WAL
[ https://issues.apache.org/jira/browse/HBASE-25596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17360530#comment-17360530 ] Duo Zhang commented on HBASE-25596: --- Have you seen what I said above? In ProtobufLogReader.readNext, we will never throw EOFException to the upper layer, so the problem you described here should not happen. That's why I ask whether you have really met data loss in production. Please try to provide a UT to actually reproduce the data loss. If this is not possible, I tend to revert the changes here to make the code simpler. Thanks.
[jira] [Commented] (HBASE-25596) Fix NPE in ReplicationSourceManager as well as avoid permanently unreplicated data due to EOFException from WAL
[ https://issues.apache.org/jira/browse/HBASE-25596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17360525#comment-17360525 ] Sandeep Pal commented on HBASE-25596: - [~zhangduo] This is where I think we will not replicate.
{code:java}
while (hasNext) {
  Entry entry = entryStream.next(); // <-- we hit an exception here
  entry = filterEntry(entry);
  if (entry != null) {
    WALEdit edit = entry.getEdit();
    if (edit != null && !edit.isEmpty()) {
      long entrySize = getEntrySizeIncludeBulkLoad(entry);
      long entrySizeExcludeBulkLoad = getEntrySizeExcludeBulkLoad(entry);
      batch.addEntry(entry, entrySize);
      if (totalBufferTooLarge || batch.getHeapSize() >= replicationBatchSizeCapacity
          || batch.getNbEntries() >= replicationBatchCountCapacity) {
        break;
      }
    }
  }
  hasNext = entryStream.hasNext();
}
{code}
While reading WALs we add entries to the batch, but if in between we hit an exception, say on the next, empty WAL file, we won't replicate the existing batch, which might have entries from the previous WAL file. I am referring to the branch-1 code [here|https://github.com/apache/hbase/blob/branch-1/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceWALReaderThread.java#L165].
[jira] [Resolved] (HBASE-25967) The readRequestsCount does not calculate when the outResults is empty
[ https://issues.apache.org/jira/browse/HBASE-25967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zheng Wang resolved HBASE-25967. Resolution: Fixed > The readRequestsCount does not calculate when the outResults is empty > - > > Key: HBASE-25967 > URL: https://issues.apache.org/jira/browse/HBASE-25967 > Project: HBase > Issue Type: Bug > Components: metrics >Reporter: Zheng Wang >Assignee: Zheng Wang >Priority: Major > Fix For: 2.5.0, 2.3.6, 3.0.0-alpha-2, 2.4.5 > > > This metric is about request, so should not depend on the result.
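The one-line idea behind this fix, as the description states it, is that the counter should track requests, not results. A minimal illustrative sketch (the names are hypothetical, not HBase's actual metrics code):

```java
import java.util.Collections;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

public class ReadCounterSketch {
    static final AtomicLong readRequestsCount = new AtomicLong();

    // Count the request before looking at the results, so an empty
    // outResults list still registers as a read request.
    static List<String> scan(List<String> outResults) {
        readRequestsCount.incrementAndGet();
        return outResults;
    }

    public static void main(String[] args) {
        scan(Collections.emptyList()); // empty result set still counts
        System.out.println(readRequestsCount.get()); // prints 1
    }
}
```

The bug being fixed is the inverse ordering: incrementing only after checking that the result list is non-empty undercounts scans that legitimately match nothing.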
[jira] [Updated] (HBASE-25967) The readRequestsCount does not calculate when the outResults is empty
[ https://issues.apache.org/jira/browse/HBASE-25967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zheng Wang updated HBASE-25967: --- Fix Version/s: 2.4.5 3.0.0-alpha-2 2.3.6 2.5.0
[jira] [Commented] (HBASE-25967) The readRequestsCount does not calculate when the outResults is empty
[ https://issues.apache.org/jira/browse/HBASE-25967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17360520#comment-17360520 ] Zheng Wang commented on HBASE-25967: Pushed to 2.3+
[jira] [Commented] (HBASE-25981) JVM crash when displaying regionserver UI
[ https://issues.apache.org/jira/browse/HBASE-25981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17360516#comment-17360516 ] Hudson commented on HBASE-25981: Results for branch branch-2 [build #273 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/273/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/273/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/273/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/273/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/273/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > JVM crash when displaying regionserver UI > - > > Key: HBASE-25981 > URL: https://issues.apache.org/jira/browse/HBASE-25981 > Project: HBase > Issue Type: Bug > Components: rpc, UI >Affects Versions: 3.0.0-alpha-1, 2.0.0 >Reporter: Xiaolin Ha >Assignee: Xiaolin Ha >Priority: Major > Fix For: 3.0.0-alpha-1, 2.5.0, 2.3.6, 2.4.5 > > Attachments: hs_err_pid116190.log-gha-data-hbase-cat0085-ui > > > The MonitoredRPCHandlerImpl refers to the params of a request, and will show > them when we call 'toJson()'. 
But the running RPC call may be cleaned up and > the ByteBuffer released before it is displayed in the UI. We need to keep the > life cycle of the RPC status monitor within the life cycle of the RPC. > {code:java} > J 19267 C2 > org.apache.hbase.thirdparty.com.google.protobuf.TextFormat$Printer.printMessage(Lorg/apache/hbase/thirdparty/com/google/protobuf/MessageOrBuilder;Lorg/apache/hbase/thirdparty/com/google/protobuf/TextFormat$TextGenerator;)V > (73 bytes) @ 0x7f1ac7e54640 [0x7f1ac7e53f60+0x6e0] > J 20932 C2 > org.apache.hbase.thirdparty.com.google.protobuf.TextFormat$Printer.print(Lorg/apache/hbase/thirdparty/com/google/protobuf/MessageOrBuilder;Lorg/apache/hbase/thirdparty/com/google/protobuf/TextFormat$TextGenerator;)V > (34 bytes) @ 0x7f1ac68ab9b0 [0x7f1ac68ab880+0x130] > J 21843 C1 > org.apache.hbase.thirdparty.com.google.protobuf.AbstractMessage.toString()Ljava/lang/String; > (8 bytes) @ 0x7f1ac620e14c [0x7f1ac620dba0+0x5ac] > J 21835 C1 > org.apache.hadoop.hbase.monitoring.MonitoredRPCHandlerImpl.toMap()Ljava/util/Map; > (240 bytes) @ 0x7f1ac5009bf4 [0x7f1ac50071c0+0x2a34] > J 21833 C1 > org.apache.hadoop.hbase.monitoring.MonitoredRPCHandlerImpl.toJSON()Ljava/lang/String; > (5 bytes) @ 0x7f1ac74efb74 [0x7f1ac74efaa0+0xd4] > j > org.apache.hadoop.hbase.tmpl.common.TaskMonitorTmplImpl.renderNoFlush(Ljava/io/Writer;)V+259 > j > org.apache.hadoop.hbase.tmpl.common.TaskMonitorTmpl.renderNoFlush(Ljava/io/Writer;)V+16 > j > org.apache.hadoop.hbase.tmpl.regionserver.RSStatusTmplImpl.renderNoFlush(Ljava/io/Writer;)V+129 > {code} > [^hs_err_pid116190.log-gha-data-hbase-cat0085-ui]
[jira] [Commented] (HBASE-25987) Make SSL keystore type configurable for HBase ThriftServer
[ https://issues.apache.org/jira/browse/HBASE-25987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17360517#comment-17360517 ] Hudson commented on HBASE-25987: Results for branch branch-2 [build #273 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/273/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/273/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/273/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/273/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/273/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} > Make SSL keystore type configurable for HBase ThriftServer > -- > > Key: HBASE-25987 > URL: https://issues.apache.org/jira/browse/HBASE-25987 > Project: HBase > Issue Type: Improvement > Components: Thrift >Affects Versions: 3.0.0-alpha-1, 2.2.7, 2.5.0, 2.3.5, 2.4.4 >Reporter: Mate Szalay-Beko >Assignee: Mate Szalay-Beko >Priority: Major > Fix For: 3.0.0-alpha-1, 2.5.0, 2.3.6, 2.4.5 > > > We get the following exception, when trying to start Hbase Thrift Server in > http mode (hbase.regionserver.thrift.http=true) and use non default (not > "jks") keystore type: > > {noformat} > 2021-06-08 07:40:10,275 ERROR org.apache.hadoop.hbase.thrift.ThriftServer: > Cannot run ThriftServer > java.io.IOException: Invalid keystore format > at > sun.security.provider.JavaKeyStore.engineLoad(JavaKeyStore.java:663) > at > sun.security.provider.JavaKeyStore$JKS.engineLoad(JavaKeyStore.java:56) > at > sun.security.provider.KeyStoreDelegator.engineLoad(KeyStoreDelegator.java:224) > at > sun.security.provider.JavaKeyStore$DualFormatJKS.engineLoad(JavaKeyStore.java:70) > at java.security.KeyStore.load(KeyStore.java:1445) > at > org.eclipse.jetty.util.security.CertificateUtils.getKeyStore(CertificateUtils.java:54) > at > org.eclipse.jetty.util.ssl.SslContextFactory.loadKeyStore(SslContextFactory.java:1197) > at > org.eclipse.jetty.util.ssl.SslContextFactory.load(SslContextFactory.java:321) > at > org.eclipse.jetty.util.ssl.SslContextFactory.doStart(SslContextFactory.java:243) > at > org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73) > at > org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169) > at > org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117) > at > org.eclipse.jetty.server.SslConnectionFactory.doStart(SslConnectionFactory.java:97) > at > org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73) > at > 
org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169) > at > org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117) > at > org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:321) > at > org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:81) > at > org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:234) > at > org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73) > at org.eclipse.jetty.server.Server.doStart(Server.java:401) > at > org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73) > at > org.apache.hadoop.hbase.thrift.ThriftServer$2.run(ThriftServer.java:861) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:360) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878) > at > org.apache.hadoop.hbase.thrift.ThriftServer.run(ThriftServer.java:855) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) > at > org.apache.hadoop.h
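The "Invalid keystore format" failure above comes from loading a keystore file with a type other than the one the code assumes. The gist of making the type configurable can be sketched with the standard JDK KeyStore API; the helper below is a hypothetical illustration, not the actual patch, and the configuration key used by HBase Thrift is not shown here:

```java
import java.security.KeyStore;

// Hypothetical helper mirroring the idea of the fix: honor a configured
// keystore type (e.g. "PKCS12") and fall back to the JVM default otherwise,
// rather than hard-coding "jks".
public class KeystoreTypeDemo {

    static KeyStore newKeyStore(String configuredType) throws Exception {
        String type = (configuredType == null || configuredType.isEmpty())
            ? KeyStore.getDefaultType()   // "jks" on JDK 8, typically "pkcs12" on newer JDKs
            : configuredType;
        // Loading a keystore file with a mismatched type is what produces
        // "java.io.IOException: Invalid keystore format" at load() time.
        return KeyStore.getInstance(type);
    }
}
```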
[jira] [Created] (HBASE-25991) do compaction on compaction server
Yulin Niu created HBASE-25991: - Summary: do compaction on compaction server Key: HBASE-25991 URL: https://issues.apache.org/jira/browse/HBASE-25991 Project: HBase Issue Type: Sub-task Reporter: Yulin Niu Assignee: Yulin Niu After HBASE-25968, this patch implements the compaction code on the compaction server.
[GitHub] [hbase] bsglz merged pull request #3351: HBASE-25967 The readRequestsCount does not calculate when the outResu…
bsglz merged pull request #3351: URL: https://github.com/apache/hbase/pull/3351 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] tomscut commented on pull request #3325: HBASE-25934 Add username for RegionScannerHolder
tomscut commented on pull request #3325: URL: https://github.com/apache/hbase/pull/3325#issuecomment-858211968 Hi @saintstack @Apache9 , could you please help merge the code if there are no other problems? Thanks a lot.
[GitHub] [hbase] Apache-HBase commented on pull request #3370: HBASE-25739 TableSkewCostFunction need to use aggregated deviation
Apache-HBase commented on pull request #3370: URL: https://github.com/apache/hbase/pull/3370#issuecomment-858211397 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 14s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 27s | master passed | | +1 :green_heart: | compile | 0m 22s | master passed | | +1 :green_heart: | shadedjars | 8m 56s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 19s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 32s | the patch passed | | +1 :green_heart: | compile | 0m 22s | the patch passed | | +1 :green_heart: | javac | 0m 22s | the patch passed | | +1 :green_heart: | shadedjars | 8m 33s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 19s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 28m 6s | hbase-balancer in the patch failed. 
| | | | 58m 11s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3370/3/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3370 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux e630f3e33ffb 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 7f7a293cb5 | | Default Java | AdoptOpenJDK-11.0.10+9 | | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3370/3/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-balancer.txt | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3370/3/testReport/ | | Max. process+thread count | 320 (vs. ulimit of 3) | | modules | C: hbase-balancer U: hbase-balancer | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3370/3/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3370: HBASE-25739 TableSkewCostFunction need to use aggregated deviation
Apache-HBase commented on pull request #3370: URL: https://github.com/apache/hbase/pull/3370#issuecomment-858207861 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 22s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 8s | master passed | | +1 :green_heart: | compile | 0m 20s | master passed | | +1 :green_heart: | shadedjars | 9m 20s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 15s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 29s | the patch passed | | +1 :green_heart: | compile | 0m 20s | the patch passed | | +1 :green_heart: | javac | 0m 20s | the patch passed | | +1 :green_heart: | shadedjars | 9m 16s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 17s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 17m 38s | hbase-balancer in the patch failed. 
| | | | 48m 25s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3370/3/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3370 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 63373ba8931a 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 7f7a293cb5 | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3370/3/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-balancer.txt | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3370/3/testReport/ | | Max. process+thread count | 230 (vs. ulimit of 3) | | modules | C: hbase-balancer U: hbase-balancer | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3370/3/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3370: HBASE-25739 TableSkewCostFunction need to use aggregated deviation
Apache-HBase commented on pull request #3370: URL: https://github.com/apache/hbase/pull/3370#issuecomment-858203641 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 26s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 59s | master passed | | +1 :green_heart: | compile | 0m 35s | master passed | | +1 :green_heart: | checkstyle | 0m 17s | master passed | | +1 :green_heart: | spotbugs | 0m 37s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 38s | the patch passed | | +1 :green_heart: | compile | 0m 32s | the patch passed | | -0 :warning: | javac | 0m 32s | hbase-balancer generated 1 new + 17 unchanged - 0 fixed = 18 total (was 17) | | +1 :green_heart: | checkstyle | 0m 14s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 18m 13s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0. | | +1 :green_heart: | spotbugs | 0m 45s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 14s | The patch does not generate ASF License warnings. 
| | | | 36m 58s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3370/3/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3370 | | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 916e349625ff 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 7f7a293cb5 | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | javac | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3370/3/artifact/yetus-general-check/output/diff-compile-javac-hbase-balancer.txt | | Max. process+thread count | 96 (vs. ulimit of 3) | | modules | C: hbase-balancer U: hbase-balancer | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3370/3/console | | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Resolved] (HBASE-25920) Support Hadoop 3.3.1
[ https://issues.apache.org/jira/browse/HBASE-25920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Stack resolved HBASE-25920. --- Hadoop Flags: Reviewed Release Note: Fixes to make unit tests pass and to make it so an hbase built from branch-2 against a 3.3.1RC can run on a 3.3.1RC small cluster. Resolution: Fixed Resolving as done, at least for now. > Support Hadoop 3.3.1 > > > Key: HBASE-25920 > URL: https://issues.apache.org/jira/browse/HBASE-25920 > Project: HBase > Issue Type: Task >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Fix For: 3.0.0-alpha-1, 2.5.0 > > > Hadoop 3.3.1 is a big release, quite different from 3.3.0. > Filing this jira to track support for Hadoop 3.3.1.
[jira] [Commented] (HBASE-25920) Support Hadoop 3.3.1
[ https://issues.apache.org/jira/browse/HBASE-25920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17360460#comment-17360460 ] Michael Stack commented on HBASE-25920: --- So, w/ HBASE-25989 in place, I can't get TestLogRolling to fail locally. I think that's enough for this Jira then. We can add a column to the refguide when we have an hbase release we want to call supported on 3.3.1 – we don't have one yet (2.5.0 will be the first? Or 2.4.5?) > Support Hadoop 3.3.1 > > > Key: HBASE-25920 > URL: https://issues.apache.org/jira/browse/HBASE-25920 > Project: HBase > Issue Type: Task >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Fix For: 3.0.0-alpha-1, 2.5.0 > > > The Hadoop 3.3.1 is a big release, quite different from 3.3.0. > File this jira to track the support for Hadoop 3.3.1.
[GitHub] [hbase] Apache-HBase commented on pull request #3371: HBASE-25984: Avoid premature reuse of sync futures in FSHLog [DRAFT]
Apache-HBase commented on pull request #3371: URL: https://github.com/apache/hbase/pull/3371#issuecomment-858154207 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 4s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 19s | master passed | | +1 :green_heart: | compile | 3m 22s | master passed | | +1 :green_heart: | checkstyle | 1m 10s | master passed | | +1 :green_heart: | spotbugs | 2m 12s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 1s | the patch passed | | +1 :green_heart: | compile | 3m 21s | the patch passed | | +1 :green_heart: | javac | 3m 21s | the patch passed | | -0 :warning: | checkstyle | 1m 10s | hbase-server: The patch generated 2 new + 12 unchanged - 0 fixed = 14 total (was 12) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 19m 58s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0. | | -1 :x: | spotbugs | 2m 23s | hbase-server generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 12s | The patch does not generate ASF License warnings. | | | | 51m 29s | | | Reason | Tests | |---:|:--| | FindBugs | module:hbase-server | | | Unread field:field be static? 
At SyncFutureCache.java:[line 44] | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3371/2/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3371 | | JIRA Issue | HBASE-25984 | | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux d11702d0593c 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 7f7a293cb5 | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3371/2/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt | | spotbugs | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3371/2/artifact/yetus-general-check/output/new-spotbugs-hbase-server.html | | Max. process+thread count | 86 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3371/2/console | | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
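The SpotBugs complaint above ("Unread field: should this field be static?") typically fires when every instance stores the same constant value. A hypothetical sketch of the usual remedy follows; the class and field names are illustrative only, not taken from the actual SyncFutureCache patch:

```java
// Illustrative only: hoist a per-instance constant to a shared static final
// so each object does not carry its own identical copy, which is what the
// SpotBugs "should this field be static?" warning points at.
public class SyncFutureCacheSketch {

    // Before (flagged): private final long evictAfterNanos = 120L * 1_000_000_000L;
    // After: one shared, immutable copy for the whole class.
    private static final long EVICT_AFTER_NANOS = 120L * 1_000_000_000L;

    public static long evictAfterNanos() {
        return EVICT_AFTER_NANOS;
    }
}
```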
[GitHub] [hbase] Apache-HBase commented on pull request #3371: HBASE-25984: Avoid premature reuse of sync futures in FSHLog [DRAFT]
Apache-HBase commented on pull request #3371: URL: https://github.com/apache/hbase/pull/3371#issuecomment-858149359 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 27s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 8s | master passed | | +1 :green_heart: | compile | 1m 11s | master passed | | +1 :green_heart: | shadedjars | 8m 11s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 41s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 16s | the patch passed | | +1 :green_heart: | compile | 1m 12s | the patch passed | | +1 :green_heart: | javac | 1m 12s | the patch passed | | +1 :green_heart: | shadedjars | 8m 9s | patch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 40s | hbase-server generated 1 new + 85 unchanged - 0 fixed = 86 total (was 85) | ||| _ Other Tests _ | | -1 :x: | unit | 8m 49s | hbase-server in the patch failed. 
| | | | 39m 14s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3371/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3371 | | JIRA Issue | HBASE-25984 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 71dbc29a6f98 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 7f7a293cb5 | | Default Java | AdoptOpenJDK-11.0.10+9 | | javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3371/2/artifact/yetus-jdk11-hadoop3-check/output/diff-javadoc-javadoc-hbase-server.txt | | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3371/2/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3371/2/testReport/ | | Max. process+thread count | 765 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3371/2/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3371: HBASE-25984: Avoid premature reuse of sync futures in FSHLog [DRAFT]
Apache-HBase commented on pull request #3371: URL: https://github.com/apache/hbase/pull/3371#issuecomment-858148811 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 0s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 59s | master passed | | +1 :green_heart: | compile | 1m 1s | master passed | | +1 :green_heart: | shadedjars | 8m 14s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 38s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 40s | the patch passed | | +1 :green_heart: | compile | 0m 59s | the patch passed | | +1 :green_heart: | javac | 0m 59s | the patch passed | | +1 :green_heart: | shadedjars | 8m 15s | patch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 36s | hbase-server generated 1 new + 20 unchanged - 0 fixed = 21 total (was 20) | ||| _ Other Tests _ | | -1 :x: | unit | 7m 57s | hbase-server in the patch failed. 
| | | | 37m 50s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3371/2/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3371 | | JIRA Issue | HBASE-25984 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux e64d0299f9a7 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 7f7a293cb5 | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3371/2/artifact/yetus-jdk8-hadoop3-check/output/diff-javadoc-javadoc-hbase-server.txt | | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3371/2/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3371/2/testReport/ | | Max. process+thread count | 915 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3371/2/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3371: HBASE-25984: Avoid premature reuse of sync futures in FSHLog [DRAFT]
Apache-HBase commented on pull request #3371: URL: https://github.com/apache/hbase/pull/3371#issuecomment-858126807 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 34s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 5m 31s | master passed | | +1 :green_heart: | compile | 4m 38s | master passed | | +1 :green_heart: | checkstyle | 1m 11s | master passed | | +1 :green_heart: | spotbugs | 2m 26s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 59s | the patch passed | | +1 :green_heart: | compile | 3m 52s | the patch passed | | +1 :green_heart: | javac | 3m 52s | the patch passed | | -0 :warning: | checkstyle | 1m 11s | hbase-server: The patch generated 2 new + 12 unchanged - 0 fixed = 14 total (was 12) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 20m 22s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0. | | -1 :x: | spotbugs | 2m 32s | hbase-server generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | ||| _ Other Tests _ | | -1 :x: | asflicense | 0m 16s | The patch generated 2 ASF License warnings. | | | | 55m 2s | | | Reason | Tests | |---:|:--| | FindBugs | module:hbase-server | | | Unread field:field be static? 
At SyncFutureCache.java:[line 27] | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3371/1/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3371 | | JIRA Issue | HBASE-25984 | | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux f88e618babcd 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 7f7a293cb5 | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3371/1/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt | | spotbugs | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3371/1/artifact/yetus-general-check/output/new-spotbugs-hbase-server.html | | asflicense | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3371/1/artifact/yetus-general-check/output/patch-asflicense-problems.txt | | Max. process+thread count | 96 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3371/1/console | | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3371: HBASE-25984: Avoid premature reuse of sync futures in FSHLog [DRAFT]
Apache-HBase commented on pull request #3371: URL: https://github.com/apache/hbase/pull/3371#issuecomment-858118713 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 27s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 38s | master passed | | +1 :green_heart: | compile | 1m 13s | master passed | | +1 :green_heart: | shadedjars | 8m 11s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 44s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 19s | the patch passed | | +1 :green_heart: | compile | 1m 13s | the patch passed | | +1 :green_heart: | javac | 1m 13s | the patch passed | | -1 :x: | shadedjars | 6m 39s | patch has 10 errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 41s | hbase-server generated 1 new + 85 unchanged - 0 fixed = 86 total (was 85) | ||| _ Other Tests _ | | -1 :x: | unit | 8m 52s | hbase-server in the patch failed. 
| | | | 38m 25s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3371/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3371 | | JIRA Issue | HBASE-25984 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 9746a8806cd5 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 7f7a293cb5 | | Default Java | AdoptOpenJDK-11.0.10+9 | | shadedjars | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3371/1/artifact/yetus-jdk11-hadoop3-check/output/patch-shadedjars.txt | | javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3371/1/artifact/yetus-jdk11-hadoop3-check/output/diff-javadoc-javadoc-hbase-server.txt | | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3371/1/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3371/1/testReport/ | | Max. process+thread count | 753 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3371/1/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3371: HBASE-25984: Avoid premature reuse of sync futures in FSHLog [DRAFT]
Apache-HBase commented on pull request #3371: URL: https://github.com/apache/hbase/pull/3371#issuecomment-858117335 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 27s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 57s | master passed | | +1 :green_heart: | compile | 1m 2s | master passed | | +1 :green_heart: | shadedjars | 8m 15s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 39s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 39s | the patch passed | | +1 :green_heart: | compile | 0m 59s | the patch passed | | +1 :green_heart: | javac | 0m 59s | the patch passed | | -1 :x: | shadedjars | 6m 32s | patch has 10 errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 37s | hbase-server generated 1 new + 20 unchanged - 0 fixed = 21 total (was 20) | ||| _ Other Tests _ | | -1 :x: | unit | 7m 51s | hbase-server in the patch failed. 
| | | | 35m 27s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3371/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3371 | | JIRA Issue | HBASE-25984 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 3e9d363cf5c7 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 7f7a293cb5 | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | shadedjars | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3371/1/artifact/yetus-jdk8-hadoop3-check/output/patch-shadedjars.txt | | javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3371/1/artifact/yetus-jdk8-hadoop3-check/output/diff-javadoc-javadoc-hbase-server.txt | | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3371/1/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3371/1/testReport/ | | Max. process+thread count | 1003 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3371/1/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] bharathv commented on a change in pull request #3371: HBASE-25984: Avoid premature reuse of sync futures in FSHLog [DRAFT]
bharathv commented on a change in pull request #3371: URL: https://github.com/apache/hbase/pull/3371#discussion_r648697770

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/AsyncFSWAL.java

## @@ -263,6 +263,14 @@ public AsyncFSWAL(FileSystem fs, Abortable abortable, Path rootDir, String logDi DEFAULT_ASYNC_WAL_WAIT_ON_SHUTDOWN_IN_SECONDS); }

+  /**
+   * Helper that marks the future as DONE and offers it back to the cache.
+   */
+  private void markFutureDoneAndOffer(SyncFuture future, long txid, Throwable t) {
+    future.done(txid, t);
+    syncFutureCache.offer(future);
+  }

Review comment: This patch in its current form doesn't get rid of future overwrites, since they do not seem to cause any issues in the AsyncWAL case (based on code reading); but if the reviewers think we should do that, I can refactor accordingly.
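The helper above offers a completed future back into a shared `syncFutureCache`. A miniature sketch of that idea (hypothetical, simplified `SyncFuture`/`SyncFutureCache` types, not the actual HBase classes) is below; the key property is that taking a future removes it from the cache, so an in-flight future can never be handed out, and overwritten, a second time:

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical, simplified stand-ins for HBase's SyncFuture / SyncFutureCache;
// the real classes carry much more state (wait conditions, timeouts, forceSync).
class SyncFuture {
    private Thread owner;          // handler thread that published this future
    private long txid = -1;
    private boolean done = false;

    // Re-arm for a new transaction; record the owning handler thread so the
    // cache can key the eventual return correctly even when offer() is
    // invoked from the consumer thread.
    synchronized SyncFuture reset(long txid) {
        this.owner = Thread.currentThread();
        this.txid = txid;
        this.done = false;
        return this;
    }

    Thread getOwner() {
        return owner;
    }

    synchronized void done(long txid, Throwable t) {
        this.done = true;
        notifyAll();               // wake a handler blocked on this future
    }

    synchronized boolean isDone() {
        return done;
    }
}

// A shared cache in place of a ThreadLocal: an entry is REMOVED when taken,
// so two publications can never alias the same live future.
class SyncFutureCache {
    private final ConcurrentHashMap<Thread, SyncFuture> cache = new ConcurrentHashMap<>();

    SyncFuture getIfPresentOrNew() {
        SyncFuture f = cache.remove(Thread.currentThread());
        return f != null ? f : new SyncFuture();
    }

    void offer(SyncFuture f) {
        cache.putIfAbsent(f.getOwner(), f); // keyed by owner, not caller
    }
}
```

Keying `offer()` by the future's owner thread is what lets the ring-buffer consumer, rather than the handler itself, safely return a handler's future for reuse.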
[GitHub] [hbase] Apache-HBase commented on pull request #3370: HBASE-25739 TableSkewCostFunction need to use aggregated deviation
Apache-HBase commented on pull request #3370: URL: https://github.com/apache/hbase/pull/3370#issuecomment-858110699 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 42s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 30s | master passed | | +1 :green_heart: | compile | 0m 23s | master passed | | +1 :green_heart: | shadedjars | 9m 23s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 19s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 5m 5s | the patch passed | | +1 :green_heart: | compile | 0m 24s | the patch passed | | +1 :green_heart: | javac | 0m 24s | the patch passed | | +1 :green_heart: | shadedjars | 9m 41s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 20s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 29m 19s | hbase-balancer in the patch failed. 
| | | | 62m 23s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3370/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3370 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 8e4b5a98dde6 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 7f7a293cb5 | | Default Java | AdoptOpenJDK-11.0.10+9 | | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3370/2/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-balancer.txt | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3370/2/testReport/ | | Max. process+thread count | 327 (vs. ulimit of 3) | | modules | C: hbase-balancer U: hbase-balancer | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3370/2/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3370: HBASE-25739 TableSkewCostFunction need to use aggregated deviation
Apache-HBase commented on pull request #3370: URL: https://github.com/apache/hbase/pull/3370#issuecomment-858110120 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 6s | Docker mode activated. | | -0 :warning: | yetus | 0m 2s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 20s | master passed | | +1 :green_heart: | compile | 0m 19s | master passed | | +1 :green_heart: | shadedjars | 9m 1s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 16s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 3s | the patch passed | | +1 :green_heart: | compile | 0m 20s | the patch passed | | +1 :green_heart: | javac | 0m 20s | the patch passed | | +1 :green_heart: | shadedjars | 9m 5s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 15s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 31m 29s | hbase-balancer in the patch failed. 
| | | | 61m 26s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3370/2/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3370 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux d49d7be27838 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 7f7a293cb5 | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3370/2/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-balancer.txt | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3370/2/testReport/ | | Max. process+thread count | 230 (vs. ulimit of 3) | | modules | C: hbase-balancer U: hbase-balancer | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3370/2/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-25984) FSHLog WAL lockup with sync future reuse [RS deadlock]
[ https://issues.apache.org/jira/browse/HBASE-25984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17360389#comment-17360389 ] Andrew Kyle Purtell commented on HBASE-25984: - FWIW I would prefer we use ThreadLocals only when there is no reasonable alternative. I don't think that bar is reached here, because as the PR demonstrates, a shared cache can work. I'm familiar with this work and some microbenchmarks done on the result (for branch-1, though) where the shared cache approach does not hurt performance and in fact produces a small performance benefit. Let me hold back on further comment until we have microbenchmarks comparing thread local vs shared cache approaches for async WAL. > FSHLog WAL lockup with sync future reuse [RS deadlock] > -- > > Key: HBASE-25984 > URL: https://issues.apache.org/jira/browse/HBASE-25984 > Project: HBase > Issue Type: Bug > Components: regionserver, wal >Affects Versions: 3.0.0-alpha-1, 1.7.0, 2.5.0, 2.4.5 >Reporter: Bharath Vissapragada >Assignee: Bharath Vissapragada >Priority: Critical > Labels: deadlock, hang > Attachments: HBASE-25984-unit-test.patch > > > We use FSHLog as the WAL implementation (branch-1 based) and under heavy load > we noticed the WAL system gets locked up due to a subtle bug involving racy > code with sync future reuse. This bug applies to all FSHLog implementations > across branches. > Symptoms: > On heavily loaded clusters with large write load we noticed that the region > servers are hanging abruptly with filled up handler queues and stuck MVCC > indicating appends/syncs not making any progress. > {noformat} > WARN [8,queue=9,port=60020] regionserver.MultiVersionConcurrencyControl - > STUCK for : 296000 millis. > MultiVersionConcurrencyControl{readPoint=172383686, writePoint=172383690, > regionName=1ce4003ab60120057734ffe367667dca} > WARN [6,queue=2,port=60020] regionserver.MultiVersionConcurrencyControl - > STUCK for : 296000 millis. 
> MultiVersionConcurrencyControl{readPoint=171504376, writePoint=171504381, > regionName=7c441d7243f9f504194dae6bf2622631} > {noformat} > All the handlers are stuck waiting for the sync futures and timing out. > {noformat} > java.lang.Object.wait(Native Method) > > org.apache.hadoop.hbase.regionserver.wal.SyncFuture.get(SyncFuture.java:183) > > org.apache.hadoop.hbase.regionserver.wal.FSHLog.blockOnSync(FSHLog.java:1509) > . > {noformat} > Log rolling is stuck because it was unable to attain a safe point > {noformat} >java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277) > org.apache.hadoop.hbase.regionserver.wal.FSHLog$SafePointZigZagLatch.waitSafePoint(FSHLog.java:1799) > > org.apache.hadoop.hbase.regionserver.wal.FSHLog.replaceWriter(FSHLog.java:900) > {noformat} > and the Ring buffer consumer thinks that there are some outstanding syncs > that need to finish.. > {noformat} > > org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.attainSafePoint(FSHLog.java:2031) > > org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1999) > > org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1857) > {noformat} > On the other hand, SyncRunner threads are idle and just waiting for work > implying that there are no pending SyncFutures that need to be run > {noformat} >sun.misc.Unsafe.park(Native Method) > java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) > > java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) > > java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) > > org.apache.hadoop.hbase.regionserver.wal.FSHLog$SyncRunner.run(FSHLog.java:1297) > java.lang.Thread.run(Thread.java:748) > {noformat} > Overall the WAL system is dead locked and could make no progress until it was > aborted. 
I got to the bottom of this issue and have a patch that can fix it > (more details in the comments due to word limit in the description). -- This message was sent by Atlassian Jira (v8.3.4#803005)
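The stuck bookkeeping described above can be reduced to a small model (hypothetical, simplified types, nothing like the real FSHLog classes): reusing a ThreadLocal-cached sync future before its previous publication has been consumed leaves the consumer holding two queue entries backed by one mutated object, so its outstanding-sync count never drains and the log roll never reaches a safe point.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal model of the hazard: one mutable future object, cached per thread.
class ReusableSyncFuture {
    long txid;

    ReusableSyncFuture reset(long txid) {
        this.txid = txid;
        return this;
    }
}

class PrematureReuseDemo {
    // Simulates a handler that publishes a sync for txid 1, times out waiting
    // on it, and -- because the future lives in a ThreadLocal -- re-arms the
    // SAME object for txid 2 while the txid-1 entry is still queued at the
    // consumer.
    static Deque<ReusableSyncFuture> run() {
        Deque<ReusableSyncFuture> consumerQueue = new ArrayDeque<>();
        ThreadLocal<ReusableSyncFuture> cached =
            ThreadLocal.withInitial(ReusableSyncFuture::new);

        consumerQueue.add(cached.get().reset(1));
        consumerQueue.add(cached.get().reset(2)); // premature reuse
        return consumerQueue;
    }
}
```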
[jira] [Commented] (HBASE-25984) FSHLog WAL lockup with sync future reuse [RS deadlock]
[ https://issues.apache.org/jira/browse/HBASE-25984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17360383#comment-17360383 ] Bharath Vissapragada commented on HBASE-25984: -- After reading the AsyncFSWAL implementation, I think overwrites of the futures are possible, but they do not cause a deadlock because of the way the safe point is attained. I uploaded a draft patch that removes the usage of ThreadLocals for both WAL implementations. It only matters for the FSHLog implementation, but in general it seems risky to use ThreadLocals that are prone to overwrites. [~zhangduo] Any thoughts?
[GitHub] [hbase] Apache-HBase commented on pull request #3370: HBASE-25739 TableSkewCostFunction need to use aggregated deviation
Apache-HBase commented on pull request #3370: URL: https://github.com/apache/hbase/pull/3370#issuecomment-858095173 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 29s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 47s | master passed | | +1 :green_heart: | compile | 0m 32s | master passed | | +1 :green_heart: | checkstyle | 0m 14s | master passed | | +1 :green_heart: | spotbugs | 0m 34s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 36s | the patch passed | | +1 :green_heart: | compile | 0m 32s | the patch passed | | -0 :warning: | javac | 0m 32s | hbase-balancer generated 1 new + 17 unchanged - 0 fixed = 18 total (was 17) | | +1 :green_heart: | checkstyle | 0m 14s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 19m 11s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0. | | +1 :green_heart: | spotbugs | 0m 45s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 11s | The patch does not generate ASF License warnings. 
| | | | 38m 18s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3370/2/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3370 | | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 862a76c7b074 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 7f7a293cb5 | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | javac | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3370/2/artifact/yetus-general-check/output/diff-compile-javac-hbase-balancer.txt | | Max. process+thread count | 96 (vs. ulimit of 3) | | modules | C: hbase-balancer U: hbase-balancer | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3370/2/console | | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] bharathv opened a new pull request #3371: HBASE-25984: Avoid premature reuse of sync futures in FSHLog [DRAFT]
bharathv opened a new pull request #3371: URL: https://github.com/apache/hbase/pull/3371
[jira] [Commented] (HBASE-25969) Cleanup netty-all transitive includes
[ https://issues.apache.org/jira/browse/HBASE-25969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17360363#comment-17360363 ] Hudson commented on HBASE-25969: Results for branch master [build #319 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/319/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/319/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/319/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/319/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Cleanup netty-all transitive includes > - > > Key: HBASE-25969 > URL: https://issues.apache.org/jira/browse/HBASE-25969 > Project: HBase > Issue Type: Sub-task >Reporter: Michael Stack >Assignee: Michael Stack >Priority: Major > Fix For: 3.0.0-alpha-1, 2.5.0, 2.4.5 > > > Our releases include lib/netty-all.jar as a transitive include from hadoop. > -Purge.- > ... looks like I can't purge the transitive netty-all includes just yet, not > w/o moving MR out of hbase core. The transitively included netty-all w/ the > old version is needed to run the tests that put up a MR cluster. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25981) JVM crash when displaying regionserver UI
[ https://issues.apache.org/jira/browse/HBASE-25981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17360364#comment-17360364 ] Hudson commented on HBASE-25981: Results for branch master [build #319 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/319/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/319/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/319/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/319/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > JVM crash when displaying regionserver UI > - > > Key: HBASE-25981 > URL: https://issues.apache.org/jira/browse/HBASE-25981 > Project: HBase > Issue Type: Bug > Components: rpc, UI > Affects Versions: 3.0.0-alpha-1, 2.0.0 > Reporter: Xiaolin Ha > Assignee: Xiaolin Ha > Priority: Major > Fix For: 3.0.0-alpha-1, 2.5.0, 2.3.6, 2.4.5 > > Attachments: hs_err_pid116190.log-gha-data-hbase-cat0085-ui > > > The MonitoredRPCHandlerImpl refers to the params of a request and shows them when we call 'toJson()'. But the running RPC call may be cleaned up and the ByteBuffer released before the params are displayed in the UI. We need to keep the life cycle of the RPC status monitor within the life cycle of the RPC. 
> {code:java} > J 19267 C2 > org.apache.hbase.thirdparty.com.google.protobuf.TextFormat$Printer.printMessage(Lorg/apache/hbase/thirdparty/com/google/protobuf/MessageOrBuilder;Lorg/apache/hbase/thirdparty/com/google/protobuf/TextFormat$TextGenerator;)V > (73 bytes) @ 0x7f1ac7e54640 [0x7f1ac7e53f60+0x6e0] > J 20932 C2 > org.apache.hbase.thirdparty.com.google.protobuf.TextFormat$Printer.print(Lorg/apache/hbase/thirdparty/com/google/protobuf/MessageOrBuilder;Lorg/apache/hbase/thirdparty/com/google/protobuf/TextFormat$TextGenerator;)V > (34 bytes) @ 0x7f1ac68ab9b0 [0x7f1ac68ab880+0x130] > J 21843 C1 > org.apache.hbase.thirdparty.com.google.protobuf.AbstractMessage.toString()Ljava/lang/String; > (8 bytes) @ 0x7f1ac620e14c [0x7f1ac620dba0+0x5ac] > J 21835 C1 > org.apache.hadoop.hbase.monitoring.MonitoredRPCHandlerImpl.toMap()Ljava/util/Map; > (240 bytes) @ 0x7f1ac5009bf4 [0x7f1ac50071c0+0x2a34] > J 21833 C1 > org.apache.hadoop.hbase.monitoring.MonitoredRPCHandlerImpl.toJSON()Ljava/lang/String; > (5 bytes) @ 0x7f1ac74efb74 [0x7f1ac74efaa0+0xd4] > j > org.apache.hadoop.hbase.tmpl.common.TaskMonitorTmplImpl.renderNoFlush(Ljava/io/Writer;)V+259 > j > org.apache.hadoop.hbase.tmpl.common.TaskMonitorTmpl.renderNoFlush(Ljava/io/Writer;)V+16 > j > org.apache.hadoop.hbase.tmpl.regionserver.RSStatusTmplImpl.renderNoFlush(Ljava/io/Writer;)V+129 > {code} > [^hs_err_pid116190.log-gha-data-hbase-cat0085-ui] -- This message was sent by Atlassian Jira (v8.3.4#803005)
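One way to read the proposed fix ("keep the life cycle of the RPC status monitor within the life cycle of the RPC") is that the monitor must stop holding live references into request buffers. A hedged sketch of that idea follows, using hypothetical, simplified types rather than the actual `MonitoredRPCHandlerImpl` API:

```java
// Hypothetical sketch: render the params into an immutable String while the
// RPC (and its ByteBuffer-backed protobuf message) is still alive, so a later
// UI render never walks a possibly-released buffer.
class MonitoredRpcSketch {
    private volatile String paramsSnapshot = "";

    // Called while the RPC is running and its buffers are still valid.
    void setRpcParams(Object params) {
        this.paramsSnapshot = String.valueOf(params); // eager copy, no reference kept
    }

    // Called at any later time by the UI page; safe after the call is cleaned up.
    String toJsonFragment() {
        return "{\"params\":\"" + paramsSnapshot + "\"}";
    }
}
```

The snapshot costs an extra string copy per monitored call, but it decouples the UI render from the request's buffer lifetime entirely.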
[jira] [Commented] (HBASE-25769) Update default weight of cost functions
[ https://issues.apache.org/jira/browse/HBASE-25769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17360329#comment-17360329 ] Clara Xiong commented on HBASE-25769: - We cannot turn "bytable" on because we are dominated by a couple of large tables. If we only move one table at a time, the move will significantly impact balance-sensitive table scans on the other tables too. Like many other users, we have a meta table with a couple of regions/replicas fewer than the node count.
> Update default weight of cost functions
> ---
>
> Key: HBASE-25769
> URL: https://issues.apache.org/jira/browse/HBASE-25769
> Project: HBase
> Issue Type: Sub-task
> Components: Balancer
> Reporter: Clara Xiong
> Priority: Major
>
> In production, we have seen some critical big tables that handle the majority of the load. Table skew is becoming more important. With the update of the table skew function, the balancer finally works for large-table distribution on large clusters. We should increase the weight from 35 to a level comparable to region count skew: 500. We could even push further and replace region count skew with table skew, since the latter works in the same way and accounts for region distribution per node.
> Another weight we found helpful to increase is for the store file size cost function. Ideally, if the normalizer works perfectly, we don't need to worry about it, since region count skew would have accounted for it. But we are often in a situation where it doesn't. Store file distribution needs to be given more weight as accommodation. We tested changing it from 5 to 200 and it works fine.
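For anyone wanting to experiment with the weights discussed above, they are plain hbase-site.xml overrides of the stochastic balancer's cost multipliers. The property names below match the usual StochasticLoadBalancer keys, but treat them, and the values (the experimental ones from this comment), as illustrative rather than recommended defaults:

```xml
<!-- Illustrative overrides from the discussion: raise table skew from its
     default of 35 toward region count skew's 500, and store file size
     from its default of 5 to the tested 200. -->
<property>
  <name>hbase.master.balancer.stochastic.tableSkewCost</name>
  <value>500</value>
</property>
<property>
  <name>hbase.master.balancer.stochastic.storefileSizeCost</name>
  <value>200</value>
</property>
```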
[jira] [Commented] (HBASE-25666) Explain why balancer is skipping runs
[ https://issues.apache.org/jira/browse/HBASE-25666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17360324#comment-17360324 ] Clara Xiong commented on HBASE-25666: - I reopened this Jira to add more logging but instead opened a new Jira. Sorry. This Jira is good for now.
> Explain why balancer is skipping runs
> -
>
> Key: HBASE-25666
> URL: https://issues.apache.org/jira/browse/HBASE-25666
> Project: HBase
> Issue Type: Improvement
> Components: Balancer, master, UI
> Reporter: Clara Xiong
> Assignee: Zhuoyue Huang
> Priority: Major
> Fix For: 3.0.0-alpha-1, 2.5.0, 2.4.4
>
> The balancer needs to strike a balance between keeping the cluster balanced and keeping it stable. There is a configuration, minCostNeedBalance, that serves as the imbalance threshold above which the balancer will try to balance. Since we use a single score to combine all the factors we consider, it is hard for operators to understand why the balancer is "stuck". We should add to the master-status UI an indication that the balancer is skipping runs, with an explanation of all factors considered, such as the weights and costs of all cost functions.
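The minCostNeedBalance threshold mentioned above is itself a single configuration knob. Assuming the usual property name (shown below; the value is illustrative, with 0.05 being the commonly cited default), lowering it makes the balancer act on smaller imbalances, and raising it makes it skip more runs:

```xml
<!-- Illustrative: run the balancer only when the combined normalized cost
     exceeds this threshold. -->
<property>
  <name>hbase.master.balancer.stochastic.minCostNeedBalance</name>
  <value>0.025</value>
</property>
```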
[jira] [Commented] (HBASE-25981) JVM crash when displaying regionserver UI
[ https://issues.apache.org/jira/browse/HBASE-25981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17360316#comment-17360316 ] Hudson commented on HBASE-25981: Results for branch branch-2.4 [build #138 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/138/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/138/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/138/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/138/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.4/138/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > JVM crash when displaying regionserver UI > - > > Key: HBASE-25981 > URL: https://issues.apache.org/jira/browse/HBASE-25981 > Project: HBase > Issue Type: Bug > Components: rpc, UI >Affects Versions: 3.0.0-alpha-1, 2.0.0 >Reporter: Xiaolin Ha >Assignee: Xiaolin Ha >Priority: Major > Fix For: 3.0.0-alpha-1, 2.5.0, 2.3.6, 2.4.5 > > Attachments: hs_err_pid116190.log-gha-data-hbase-cat0085-ui > > > The MonitoredRPCHandlerImpl refers to the params of a request, and will show > them when we call 'toJson()'. 
But the running RPC call may be cleaned up and the ByteBuffer released before it is displayed in the UI. We need to keep the life cycle of the RPC status monitor within the life cycle of the RPC.
{code:java}
J 19267 C2 org.apache.hbase.thirdparty.com.google.protobuf.TextFormat$Printer.printMessage(Lorg/apache/hbase/thirdparty/com/google/protobuf/MessageOrBuilder;Lorg/apache/hbase/thirdparty/com/google/protobuf/TextFormat$TextGenerator;)V (73 bytes) @ 0x7f1ac7e54640 [0x7f1ac7e53f60+0x6e0]
J 20932 C2 org.apache.hbase.thirdparty.com.google.protobuf.TextFormat$Printer.print(Lorg/apache/hbase/thirdparty/com/google/protobuf/MessageOrBuilder;Lorg/apache/hbase/thirdparty/com/google/protobuf/TextFormat$TextGenerator;)V (34 bytes) @ 0x7f1ac68ab9b0 [0x7f1ac68ab880+0x130]
J 21843 C1 org.apache.hbase.thirdparty.com.google.protobuf.AbstractMessage.toString()Ljava/lang/String; (8 bytes) @ 0x7f1ac620e14c [0x7f1ac620dba0+0x5ac]
J 21835 C1 org.apache.hadoop.hbase.monitoring.MonitoredRPCHandlerImpl.toMap()Ljava/util/Map; (240 bytes) @ 0x7f1ac5009bf4 [0x7f1ac50071c0+0x2a34]
J 21833 C1 org.apache.hadoop.hbase.monitoring.MonitoredRPCHandlerImpl.toJSON()Ljava/lang/String; (5 bytes) @ 0x7f1ac74efb74 [0x7f1ac74efaa0+0xd4]
j org.apache.hadoop.hbase.tmpl.common.TaskMonitorTmplImpl.renderNoFlush(Ljava/io/Writer;)V+259
j org.apache.hadoop.hbase.tmpl.common.TaskMonitorTmpl.renderNoFlush(Ljava/io/Writer;)V+16
j org.apache.hadoop.hbase.tmpl.regionserver.RSStatusTmplImpl.renderNoFlush(Ljava/io/Writer;)V+129
{code}
[^hs_err_pid116190.log-gha-data-hbase-cat0085-ui] -- This message was sent by Atlassian Jira (v8.3.4#803005)
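The hazard behind this crash can be sketched without any HBase classes: a monitor that keeps a live reference to the request params can render them after the backing buffer is released, while snapshotting them to an immutable String during the RPC's lifetime is safe. The `Params`, `render`, and `snapshotWhileAlive` names below are invented for illustration and are not the actual MonitoredRPCHandlerImpl fix:

```java
public class MonitorSnapshotSketch {
    // Stand-in for a request whose backing buffer can be released.
    static final class Params {
        boolean released;
        final String text;
        Params(String text) { this.text = text; }
        void release() { released = true; }
        String render() {
            // Rendering after release is the use-after-release the crash shows
            // (here it throws; with a real off-heap ByteBuffer the JVM crashed).
            if (released) {
                throw new IllegalStateException("buffer already released");
            }
            return text;
        }
    }

    // Safe pattern: snapshot while the RPC (and its buffer) is still alive,
    // so a later UI render never touches the released buffer.
    static String snapshotWhileAlive(Params p) {
        return p.render();
    }

    public static void main(String[] args) {
        Params p = new Params("{\"method\":\"Scan\"}");
        String snapshot = snapshotWhileAlive(p); // taken inside the RPC lifetime
        p.release();                             // RPC completes, buffer released
        assert snapshot.equals("{\"method\":\"Scan\"}"); // snapshot still valid
        boolean threw = false;
        try {
            p.render(); // the lazy toJSON()-style path from the stack trace
        } catch (IllegalStateException e) {
            threw = true;
        }
        assert threw;
    }
}
```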
[jira] [Commented] (HBASE-25729) Upgrade to latest hbase-thirdparty
[ https://issues.apache.org/jira/browse/HBASE-25729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17360312#comment-17360312 ] Pankaj Kumar commented on HBASE-25729: -- I too feel it is a little risky to upgrade to thirdparty-3.5.1 in the 2.3 & 2.4 release lines, but pb & guava are only minor-version upgrades, so IMO we can go for 3.5.1. {quote} I now think a 3.4.2, just to leave out the pb and guava bumps, not worth the effort. {quote} Not worth much unless there is a strong reason. > Upgrade to latest hbase-thirdparty > -- > > Key: HBASE-25729 > URL: https://issues.apache.org/jira/browse/HBASE-25729 > Project: HBase > Issue Type: Sub-task > Components: build, thirdparty >Affects Versions: 2.4.2 >Reporter: Andrew Kyle Purtell >Priority: Major > Fix For: 3.0.0-alpha-1, 2.5.0, 2.4.5 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25729) Upgrade to latest hbase-thirdparty
[ https://issues.apache.org/jira/browse/HBASE-25729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17360303#comment-17360303 ] Michael Stack commented on HBASE-25729: --- [~apurtell] and [~pankajkumar] See above when you have a chance. Looking for input... Thanks. > Upgrade to latest hbase-thirdparty > -- > > Key: HBASE-25729 > URL: https://issues.apache.org/jira/browse/HBASE-25729 > Project: HBase > Issue Type: Sub-task > Components: build, thirdparty >Affects Versions: 2.4.2 >Reporter: Andrew Kyle Purtell >Priority: Major > Fix For: 3.0.0-alpha-1, 2.5.0, 2.4.5 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25990) Add donated buildbots for jenkins
[ https://issues.apache.org/jira/browse/HBASE-25990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17360300#comment-17360300 ] Michael Stack commented on HBASE-25990: --- After running the script, reached out to Gavin on slack and he verified he could log in and sudo. He just offered that he has all he needs now. > Add donated buildbots for jenkins > - > > Key: HBASE-25990 > URL: https://issues.apache.org/jira/browse/HBASE-25990 > Project: HBase > Issue Type: Task > Components: build >Reporter: Michael Stack >Priority: Major > > This issue is for keeping notes on how to add a donated buildbot to our > apache build. > My employer donated budget (I badly under-estimated cost but whatever...). > This issue is about adding 5 GCP nodes. > There is this page up on apache on donating machines for build > https://infra.apache.org/hosting-external-agent.html It got me some of the > way... at least as far as the bit about mailing root@a.o(nada). > At [~zhangduo]'s encouragement -- he has been this route already adding in > the xiaomi donation -- I filed a JIRA after deploying a machine on GCP, > INFRA-21973. > I then reached out on slack and the gentleman Gavin MacDonald picked up the > task. > He told me to run this script on all hosts after making edits (comment out line > #39 where we set hostname -- doesn't work): > https://github.com/apache/cassandra-builds/blob/trunk/jenkins-dsl/agent-install.sh > (For more context on the above script and as a good backgrounder, see the > nice C* page on how to do this setup: > https://github.com/apache/cassandra-builds/blob/trunk/ASF-jenkins-agents.md) > After doing the above, I had to do a visudo on each host to add a line for an > infra account to allow passwordless access. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] ben-manes commented on a change in pull request #3215: HBASE-25698 Fixing IllegalReferenceCountException when using TinyLfuBlockCache
ben-manes commented on a change in pull request #3215: URL: https://github.com/apache/hbase/pull/3215#discussion_r648556289 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/TinyLfuBlockCache.java ## @@ -158,7 +158,13 @@ public boolean containsBlock(BlockCacheKey cacheKey) { @Override public Cacheable getBlock(BlockCacheKey cacheKey, boolean caching, boolean repeat, boolean updateCacheMetrics) { -Cacheable value = cache.getIfPresent(cacheKey); +Cacheable value = cache.asMap().computeIfPresent(cacheKey, (blockCacheKey, cacheable) -> { + // It will be referenced by RPC path, so increase here. NOTICE: Must do the retain inside + // this block. because if retain outside the map#computeIfPresent, the evictBlock may remove + // the block and release, then we're retaining a block with refCnt=0 which is disallowed. + cacheable.retain(); + return cacheable; +}); Review comment: I would switch to `map.remove(cacheKey, cb)` so that a race doesn't discard a new mapping. If my naive reading is correct, this `map.remove(cacheKey)` would already occur after the `cb.release()`, so this may not be necessary. That could mean that a new block was computed, so this remove discards the wrong block mistakenly. You might not need the map removal here if you can rely on the release being performed after the map operation completed. https://github.com/apache/hbase/blob/947c03cf7249dec09162da445df7d36b8dbd4bfc/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/TinyLfuBlockCache.java#L246-L252 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
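The race ben-manes describes -- a bare `remove(key)` discarding a newer mapping installed by a concurrent writer -- is exactly what `ConcurrentMap`'s two-argument `remove(key, value)` guards against. A minimal sketch with plain strings standing in for blocks; `evictExact` and the key/value names are illustrative assumptions, not HBase code:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class RemoveRaceSketch {
    // Atomic conditional removal: succeeds only if the current mapping is
    // still the exact entry we decided to evict, so a racing writer's
    // replacement entry is never discarded by mistake.
    static boolean evictExact(ConcurrentMap<String, String> map,
                              String key, String expected) {
        return map.remove(key, expected);
    }

    public static void main(String[] args) {
        ConcurrentMap<String, String> cache = new ConcurrentHashMap<>();
        cache.put("block-1", "old");
        // Simulate another thread recomputing the block between our read
        // of "old" and our attempt to remove it.
        cache.put("block-1", "new");
        // The conditional remove refuses to discard the newer mapping,
        // where a bare cache.remove("block-1") would have thrown it away.
        boolean removed = evictExact(cache, "block-1", "old");
        assert !removed;
        assert "new".equals(cache.get("block-1"));
    }
}
```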
[GitHub] [hbase] virajjasani commented on a change in pull request #3215: HBASE-25698 Fixing IllegalReferenceCountException when using TinyLfuBlockCache
virajjasani commented on a change in pull request #3215: URL: https://github.com/apache/hbase/pull/3215#discussion_r648542799 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/TinyLfuBlockCache.java ## @@ -158,7 +158,13 @@ public boolean containsBlock(BlockCacheKey cacheKey) { @Override public Cacheable getBlock(BlockCacheKey cacheKey, boolean caching, boolean repeat, boolean updateCacheMetrics) { -Cacheable value = cache.getIfPresent(cacheKey); +Cacheable value = cache.asMap().computeIfPresent(cacheKey, (blockCacheKey, cacheable) -> { + // It will be referenced by RPC path, so increase here. NOTICE: Must do the retain inside + // this block. because if retain outside the map#computeIfPresent, the evictBlock may remove + // the block and release, then we're retaining a block with refCnt=0 which is disallowed. + cacheable.retain(); + return cacheable; +}); Review comment: @saintstack @anoopsjohn @ben-manes How about this one? I am yet to benchmark this and perform chaos testing with this, but before I do it, just wanted to see if you are aligned with this rough patch. 
```
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java
index 3e5ba1d19c..bb2b394ccd 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java
@@ -39,6 +39,7 @@ import org.apache.hadoop.hbase.io.HeapSize;
 import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
 import org.apache.hadoop.hbase.util.ClassSize;
 import org.apache.hadoop.util.StringUtils;
+import org.apache.hbase.thirdparty.io.netty.util.IllegalReferenceCountException;
 import org.apache.yetus.audience.InterfaceAudience;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -510,14 +511,15 @@ public class LruBlockCache implements FirstLevelBlockCache {
   @Override
   public Cacheable getBlock(BlockCacheKey cacheKey, boolean caching, boolean repeat,
       boolean updateCacheMetrics) {
-    LruCachedBlock cb = map.computeIfPresent(cacheKey, (key, val) -> {
-      // It will be referenced by RPC path, so increase here. NOTICE: Must do the retain inside
-      // this block. because if retain outside the map#computeIfPresent, the evictBlock may remove
-      // the block and release, then we're retaining a block with refCnt=0 which is disallowed.
-      // see HBASE-22422.
-      val.getBuffer().retain();
-      return val;
-    });
+    LruCachedBlock cb = map.get(cacheKey);
+    if (cb != null) {
+      try {
+        cb.getBuffer().retain();
+      } catch (IllegalReferenceCountException e) {
+        // map.remove(cacheKey); ==> not required here
+        cb = null;
+      }
+    }
     if (cb == null) {
       if (!repeat && updateCacheMetrics) {
         stats.miss(caching, cacheKey.isPrimary(), cacheKey.getBlockType());
```
And this perf improvement is to be followed by all L1 caching, something we can take up as a follow up task. -- This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
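The idea in the rough patch above -- an optimistic `get()` plus `retain()`, treating a refCnt=0 block as a cache miss instead of serializing readers through `computeIfPresent`'s per-key lock -- can be demonstrated self-contained. The `Block` class below mimics netty-style reference counting with `IllegalStateException` standing in for `IllegalReferenceCountException` so the sketch needs no netty dependency; all names are illustrative, not HBase's:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

public class OptimisticRetainSketch {
    // Minimal reference-counted block: retain() fails once the count has
    // dropped to zero, like retaining a released netty buffer.
    static final class Block {
        final AtomicInteger refCnt = new AtomicInteger(1);
        void retain() {
            int c;
            do {
                c = refCnt.get();
                if (c == 0) {
                    throw new IllegalStateException("refCnt=0");
                }
            } while (!refCnt.compareAndSet(c, c + 1));
        }
        void release() { refCnt.decrementAndGet(); }
    }

    // Optimistic read: plain get() then retain(); a concurrently evicted
    // (and released) block is simply treated as a miss.
    static Block getBlock(ConcurrentMap<String, Block> map, String key) {
        Block b = map.get(key);
        if (b != null) {
            try {
                b.retain();
            } catch (IllegalStateException e) {
                b = null; // evicted and released between get() and retain()
            }
        }
        return b;
    }

    public static void main(String[] args) {
        ConcurrentMap<String, Block> cache = new ConcurrentHashMap<>();
        Block live = new Block();
        cache.put("live", live);
        assert getBlock(cache, "live") == live;
        assert live.refCnt.get() == 2; // caller now holds a reference

        Block dead = new Block();
        dead.release();                // simulate eviction releasing the block
        cache.put("dead", dead);
        assert getBlock(cache, "dead") == null; // treated as a miss, no exception escapes
    }
}
```

The trade-off being debated on the PR is visible here: `computeIfPresent` makes retain atomic with the lookup but locks the map bin, while this version stays lock-free on reads at the cost of catching the rare retain-after-release failure.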
[GitHub] [hbase] virajjasani commented on a change in pull request #3215: HBASE-25698 Fixing IllegalReferenceCountException when using TinyLfuBlockCache
virajjasani commented on a change in pull request #3215: URL: https://github.com/apache/hbase/pull/3215#discussion_r648564135 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/TinyLfuBlockCache.java ## @@ -158,7 +158,13 @@ public boolean containsBlock(BlockCacheKey cacheKey) { @Override public Cacheable getBlock(BlockCacheKey cacheKey, boolean caching, boolean repeat, boolean updateCacheMetrics) { -Cacheable value = cache.getIfPresent(cacheKey); +Cacheable value = cache.asMap().computeIfPresent(cacheKey, (blockCacheKey, cacheable) -> { + // It will be referenced by RPC path, so increase here. NOTICE: Must do the retain inside + // this block. because if retain outside the map#computeIfPresent, the evictBlock may remove + // the block and release, then we're retaining a block with refCnt=0 which is disallowed. + cacheable.retain(); + return cacheable; +}); Review comment: Sounds good, `map.remove(cacheKey, cb)` too should not be required in this case. Thanks @ben-manes -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-25969) Cleanup netty-all transitive includes
[ https://issues.apache.org/jira/browse/HBASE-25969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17360272#comment-17360272 ] Hudson commented on HBASE-25969: Results for branch branch-2 [build #272 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/272/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/272/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/272/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/272/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/272/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Cleanup netty-all transitive includes > - > > Key: HBASE-25969 > URL: https://issues.apache.org/jira/browse/HBASE-25969 > Project: HBase > Issue Type: Sub-task >Reporter: Michael Stack >Assignee: Michael Stack >Priority: Major > Fix For: 3.0.0-alpha-1, 2.5.0, 2.4.5 > > > Our releases include lib/netty-all.jar as a transitive include from hadoop. > -Purge.- > ... looks like I can't purge the transitive netty-all includes just yet, not > w/o moving MR out of hbase core. The transitively included netty-all w/ the > old version is needed to run the tests that put up a MR cluster. 
-- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache-HBase commented on pull request #3370: HBASE-25739 TableSkewCostFunction need to use aggregated deviation
Apache-HBase commented on pull request #3370: URL: https://github.com/apache/hbase/pull/3370#issuecomment-857942546 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 3m 37s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 51s | master passed | | +1 :green_heart: | compile | 0m 23s | master passed | | +1 :green_heart: | shadedjars | 8m 48s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 23s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 44s | the patch passed | | +1 :green_heart: | compile | 0m 23s | the patch passed | | +1 :green_heart: | javac | 0m 23s | the patch passed | | +1 :green_heart: | shadedjars | 8m 36s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 18s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 13m 2s | hbase-balancer in the patch failed. 
| | | | 46m 21s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3370/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3370 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 9ca9b35e90a9 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 7f7a293cb5 | | Default Java | AdoptOpenJDK-11.0.10+9 | | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3370/1/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-balancer.txt | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3370/1/testReport/ | | Max. process+thread count | 316 (vs. ulimit of 3) | | modules | C: hbase-balancer U: hbase-balancer | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3370/1/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3350: HBASE-25950 add basic compaction server metric
Apache-HBase commented on pull request #3350: URL: https://github.com/apache/hbase/pull/3350#issuecomment-857938738 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 12s | Docker mode activated. | | -0 :warning: | yetus | 0m 4s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ HBASE-25714 Compile Tests _ | | +0 :ok: | mvndep | 0m 29s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 5m 20s | HBASE-25714 passed | | +1 :green_heart: | compile | 3m 23s | HBASE-25714 passed | | +1 :green_heart: | shadedjars | 9m 22s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 32s | HBASE-25714 passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 5m 16s | the patch passed | | +1 :green_heart: | compile | 3m 11s | the patch passed | | +1 :green_heart: | javac | 3m 11s | the patch passed | | +1 :green_heart: | shadedjars | 9m 9s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 30s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 11s | hbase-protocol-shaded in the patch passed. | | +1 :green_heart: | unit | 1m 57s | hbase-client in the patch passed. | | +1 :green_heart: | unit | 222m 39s | hbase-server in the patch passed. 
| | | | 268m 59s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3350/3/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3350 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 311558186e50 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | HBASE-25714 / 997af0215c | | Default Java | AdoptOpenJDK-11.0.10+9 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3350/3/testReport/ | | Max. process+thread count | 3032 (vs. ulimit of 3) | | modules | C: hbase-protocol-shaded hbase-client hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3350/3/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3370: HBASE-25739 TableSkewCostFunction need to use aggregated deviation
Apache-HBase commented on pull request #3370: URL: https://github.com/apache/hbase/pull/3370#issuecomment-857933746 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 30s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 0s | master passed | | +1 :green_heart: | compile | 0m 33s | master passed | | +1 :green_heart: | checkstyle | 0m 16s | master passed | | +1 :green_heart: | spotbugs | 0m 39s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 57s | the patch passed | | +1 :green_heart: | compile | 0m 34s | the patch passed | | -0 :warning: | javac | 0m 34s | hbase-balancer generated 1 new + 17 unchanged - 0 fixed = 18 total (was 17) | | +1 :green_heart: | checkstyle | 0m 13s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 18m 55s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0. | | +1 :green_heart: | spotbugs | 0m 46s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 13s | The patch does not generate ASF License warnings. 
| | | | 38m 49s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3370/1/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3370 | | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 6d3fe8ec7a8d 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 7f7a293cb5 | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | javac | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3370/1/artifact/yetus-general-check/output/diff-compile-javac-hbase-balancer.txt | | Max. process+thread count | 96 (vs. ulimit of 3) | | modules | C: hbase-balancer U: hbase-balancer | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3370/1/console | | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3370: HBASE-25739 TableSkewCostFunction need to use aggregated deviation
Apache-HBase commented on pull request #3370: URL: https://github.com/apache/hbase/pull/3370#issuecomment-857933462 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 26s | Docker mode activated. | | -0 :warning: | yetus | 0m 2s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 3s | master passed | | +1 :green_heart: | compile | 0m 20s | master passed | | +1 :green_heart: | shadedjars | 8m 17s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 18s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 40s | the patch passed | | +1 :green_heart: | compile | 0m 21s | the patch passed | | +1 :green_heart: | javac | 0m 21s | the patch passed | | +1 :green_heart: | shadedjars | 8m 17s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 17s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 11m 16s | hbase-balancer in the patch failed. 
| | | | 38m 33s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3370/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3370 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux a10463c3ca59 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 7f7a293cb5 | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3370/1/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-balancer.txt | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3370/1/testReport/ | | Max. process+thread count | 355 (vs. ulimit of 3) | | modules | C: hbase-balancer U: hbase-balancer | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3370/1/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Resolved] (HBASE-25971) FanOutOneBlockAsyncDFSOutputHelper stuck when run against hadoop-3.3.1-RC3
[ https://issues.apache.org/jira/browse/HBASE-25971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Stack resolved HBASE-25971. --- Resolution: Not A Problem Resolving as 'Not a problem'. This seems to have been some issue around building artifacts for RC testing. Subsequent builds worked (after HBASE-25989) > FanOutOneBlockAsyncDFSOutputHelper stuck when run against hadoop-3.3.1-RC3 > -- > > Key: HBASE-25971 > URL: https://issues.apache.org/jira/browse/HBASE-25971 > Project: HBase > Issue Type: Bug >Reporter: Michael Stack >Priority: Major > > This is in the log: > {code} > 2021-06-04 21:29:39,138 DEBUG [master/oss-master-1:16000:becomeActiveMaster] > ipc.ProtobufRpcEngine: Call: addBlock took 6ms > 2021-06-04 21:29:39,169 WARN [RS-EventLoopGroup-1-1] > concurrent.DefaultPromise: An exception was thrown by > org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete() > java.lang.IllegalArgumentException: object is not an instance of declaring > class > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.base/java.lang.reflect.Method.invoke(Method.java:566) > at > org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.(ProtobufDecoder.java:69) > at > org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:343) > at > org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) > at > org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:425) > at > 
org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578) > at > org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:552) > at > org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491) > at > org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:184) > at > org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:419) > at > org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) > at > org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:477) > at > org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:472) > at > org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578) > at > org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:571) > at > org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:550) > at > org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491) > at > org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:616) > at > org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:605) > at > org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104) > at > org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) > at > 
org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) > at > org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) > at > org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) > at > org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:470) > at > org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378) > at > org.apache.hbase.thirdparty.io.netty.u
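The `java.lang.IllegalArgumentException: object is not an instance of declaring class` near the top of the trace is what reflection throws when a `Method` looked up on one class is invoked on an instance of an unrelated class, which is exactly what happens when shaded and unshaded protobuf types get mixed. A minimal, self-contained sketch of that failure mode (the class names here are illustrative stand-ins, not HBase's actual types):

```java
import java.lang.reflect.Method;

public class ReflectionMismatch {
    // Stand-ins for the shaded and unshaded protobuf Message types that the
    // reflection in FanOutOneBlockAsyncDFSOutputHelper can mix up.
    static class ShadedMessage { public String describe() { return "shaded"; } }
    static class UnshadedMessage { public String describe() { return "unshaded"; } }

    /** Invoke a Method looked up on one class against an instance of the other. */
    static boolean mismatchThrows() {
        try {
            Method m = ShadedMessage.class.getMethod("describe");
            m.invoke(new UnshadedMessage()); // wrong receiver type
            return false;
        } catch (IllegalArgumentException e) {
            // HotSpot reports: "object is not an instance of declaring class"
            return true;
        } catch (ReflectiveOperationException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("mismatch throws: " + mismatchThrows());
    }
}
```

Both classes define a method with the same name, but `Method` objects are bound to their declaring class, so the receiver check fails at invoke time.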
[jira] [Commented] (HBASE-25920) Support Hadoop 3.3.1
[ https://issues.apache.org/jira/browse/HBASE-25920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17360255#comment-17360255 ] Michael Stack commented on HBASE-25920: --- TestLogRolling fails most of the time. I was able to get it to fail locally if I ran the unit test on repeat. Looking... > Support Hadoop 3.3.1 > > > Key: HBASE-25920 > URL: https://issues.apache.org/jira/browse/HBASE-25920 > Project: HBase > Issue Type: Task >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Fix For: 3.0.0-alpha-1, 2.5.0 > > > The Hadoop 3.3.1 is a big release, quite different from 3.3.0. > File this jira to track the support for Hadoop 3.3.1. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] virajjasani commented on a change in pull request #3215: HBASE-25698 Fixing IllegalReferenceCountException when using TinyLfuBlockCache
virajjasani commented on a change in pull request #3215: URL: https://github.com/apache/hbase/pull/3215#discussion_r648542799 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/TinyLfuBlockCache.java ## @@ -158,7 +158,13 @@ public boolean containsBlock(BlockCacheKey cacheKey) { @Override public Cacheable getBlock(BlockCacheKey cacheKey, boolean caching, boolean repeat, boolean updateCacheMetrics) { -Cacheable value = cache.getIfPresent(cacheKey); +Cacheable value = cache.asMap().computeIfPresent(cacheKey, (blockCacheKey, cacheable) -> { + // It will be referenced by RPC path, so increase here. NOTICE: Must do the retain inside + // this block. because if retain outside the map#computeIfPresent, the evictBlock may remove + // the block and release, then we're retaining a block with refCnt=0 which is disallowed. + cacheable.retain(); + return cacheable; +}); Review comment: @saintstack @anoopsjohn @ben-manes How about this one? I am yet to benchmark this and perform chaos testing with this, but before I do it, just wanted to see if you are aligned with this rough patch. 
``` diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java index 3e5ba1d19c..bb2b394ccd 100644 --- a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java +++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java @@ -39,6 +39,7 @@ import org.apache.hadoop.hbase.io.HeapSize; import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding; import org.apache.hadoop.hbase.util.ClassSize; import org.apache.hadoop.util.StringUtils; +import org.apache.hbase.thirdparty.io.netty.util.IllegalReferenceCountException; import org.apache.yetus.audience.InterfaceAudience; import org.slf4j.Logger; import org.slf4j.LoggerFactory; @@ -510,14 +511,15 @@ public class LruBlockCache implements FirstLevelBlockCache { @Override public Cacheable getBlock(BlockCacheKey cacheKey, boolean caching, boolean repeat, boolean updateCacheMetrics) { -LruCachedBlock cb = map.computeIfPresent(cacheKey, (key, val) -> { - // It will be referenced by RPC path, so increase here. NOTICE: Must do the retain inside - // this block. because if retain outside the map#computeIfPresent, the evictBlock may remove - // the block and release, then we're retaining a block with refCnt=0 which is disallowed. - // see HBASE-22422. - val.getBuffer().retain(); - return val; -}); +LruCachedBlock cb = map.get(cacheKey); +if (cb != null) { + try { +cb.getBuffer().retain(); + } catch (IllegalReferenceCountException e) { +map.remove(cacheKey); +cb = null; + } +} if (cb == null) { if (!repeat && updateCacheMetrics) { stats.miss(caching, cacheKey.isPrimary(), cacheKey.getBlockType()); ``` And this perf improvement is to be followed by all L1 caching, something we can take up as a follow up task. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
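The proposed diff above replaces retain-inside-`computeIfPresent` with an optimistic `get` followed by `retain`, treating a retain that loses the race with eviction as a plain cache miss. A toy sketch of that pattern, with a hand-rolled ref-counted value standing in for netty's `ByteBuf` semantics (all names here are illustrative, not HBase's):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class OptimisticRetainCache {
    /** Toy ref-counted value: retain() after release to zero must fail, like netty's ByteBuf. */
    static class RefCounted {
        final AtomicInteger refCnt = new AtomicInteger(1);
        void retain() {
            int c;
            do {
                c = refCnt.get();
                if (c == 0) {
                    // Stands in for netty's IllegalReferenceCountException.
                    throw new IllegalStateException("refCnt: 0");
                }
            } while (!refCnt.compareAndSet(c, c + 1));
        }
        void release() { refCnt.decrementAndGet(); }
    }

    final ConcurrentHashMap<String, RefCounted> map = new ConcurrentHashMap<>();

    /**
     * The optimistic get-then-retain from the proposed diff: a retain that loses
     * the race with eviction is turned into a cache miss instead of an error.
     */
    RefCounted getBlock(String key) {
        RefCounted cb = map.get(key);
        if (cb != null) {
            try {
                cb.retain();
            } catch (IllegalStateException e) {
                map.remove(key); // the evictor already released this block
                cb = null;
            }
        }
        return cb; // null => caller records a miss
    }
}
```

The trade-off versus `computeIfPresent` is that the retain no longer runs under the map's per-bin lock, so eviction can slip in between `get` and `retain`; the catch block converts that narrow window into a miss and cleans up the stale entry.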
[jira] [Commented] (HBASE-25990) Add donated buildbots for jenkins
[ https://issues.apache.org/jira/browse/HBASE-25990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17360253#comment-17360253 ] Michael Stack commented on HBASE-25990: --- !https://lh5.googleusercontent.com/m8S_cqtqMMyUlkqCQfhYCr8G7rVtS2Zl_B1r1U1u9f1l-801AnPp8SA98lQ2_HvbYRN6xoipbNeD2nxMCRCzl4nmCKaXNGrSzXhkcc4aRdq4pQw1iiMsA78UBJamStQHyXQ1-482! > Add donated buildbots for jenkins > - > > Key: HBASE-25990 > URL: https://issues.apache.org/jira/browse/HBASE-25990 > Project: HBase > Issue Type: Task > Components: build >Reporter: Michael Stack >Priority: Major > > This issue is for keeping notes on how to add a donated buildbot to our > apache build. > My employer donated budget (I badly under-estimated cost but whatever...). > This issue is about adding 5 GCP nodes. > There is this page up on apache on donating machines for build > https://infra.apache.org/hosting-external-agent.html It got me some of the > ways... at least as far as the bit about mailing root@a.o(nada). > At [~zhangduo]'s encouragement -- he has been this route already adding in > the xiaomi donation -- I filed a JIRA after deploying a machine on GCP, > INFRA-21973. > I then reached out on slack and the gentleman Gavin MacDonald picked up the > task. > He told me run this script on all hosts after making edits (comment out line > #39 where we set hostname -- doesn't work): > https://github.com/apache/cassandra-builds/blob/trunk/jenkins-dsl/agent-install.sh > (For more context on the above script and as a good backgrounder, see the > nice C* page on how to do this setup: > https://github.com/apache/cassandra-builds/blob/trunk/ASF-jenkins-agents.md) > After doing the above, I had to do a visudo on each host to add a line for an > infra account to allow passwordless access. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-25990) Add donated buildbots for jenkins
Michael Stack created HBASE-25990: - Summary: Add donated buildbots for jenkins Key: HBASE-25990 URL: https://issues.apache.org/jira/browse/HBASE-25990 Project: HBase Issue Type: Task Components: build Reporter: Michael Stack This issue is for keeping notes on how to add a donated buildbot to our apache build. My employer donated budget (I badly under-estimated cost but whatever...). This issue is about adding 5 GCP nodes. There is this page up on apache on donating machines for build https://infra.apache.org/hosting-external-agent.html It got me some of the ways... at least as far as the bit about mailing root@a.o(nada). At [~zhangduo]'s encouragement -- he has been this route already adding in the xiaomi donation -- I filed a JIRA after deploying a machine on GCP, INFRA-21973. I then reached out on slack and the gentleman Gavin MacDonald picked up the task. He told me to run this script on all hosts after making edits (comment out line #39 where we set hostname -- doesn't work): https://github.com/apache/cassandra-builds/blob/trunk/jenkins-dsl/agent-install.sh (For more context on the above script and as a good backgrounder, see the nice C* page on how to do this setup: https://github.com/apache/cassandra-builds/blob/trunk/ASF-jenkins-agents.md) After doing the above, I had to do a visudo on each host to add a line for an infra account to allow passwordless access.
[GitHub] [hbase] Apache-HBase commented on pull request #3368: HBASE-25989 FanOutOneBlockAsyncDFSOutput using shaded protobuf in hdf…
Apache-HBase commented on pull request #3368: URL: https://github.com/apache/hbase/pull/3368#issuecomment-857904181 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 6s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 17s | master passed | | +1 :green_heart: | compile | 0m 30s | master passed | | +1 :green_heart: | checkstyle | 0m 13s | master passed | | +1 :green_heart: | spotbugs | 0m 34s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 0s | the patch passed | | +1 :green_heart: | compile | 0m 28s | the patch passed | | +1 :green_heart: | javac | 0m 28s | the patch passed | | +1 :green_heart: | checkstyle | 0m 11s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 20m 6s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0. | | +1 :green_heart: | spotbugs | 0m 43s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 12s | The patch does not generate ASF License warnings. 
| | | | 40m 44s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3368/1/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3368 | | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 806dc8c12adf 4.15.0-142-generic #146-Ubuntu SMP Tue Apr 13 01:11:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 7f7a293cb5 | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | Max. process+thread count | 85 (vs. ulimit of 3) | | modules | C: hbase-asyncfs U: hbase-asyncfs | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3368/1/console | | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3369: HBASE-25859 Reference class incorrectly parses the protobuf magic marker
Apache-HBase commented on pull request #3369: URL: https://github.com/apache/hbase/pull/3369#issuecomment-857897042 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 0s | Docker mode activated. | | -1 :x: | docker | 9m 39s | Docker failed to build yetus/hbase:7ff737df0f. | | Subsystem | Report/Notes | |--:|:-| | GITHUB PR | https://github.com/apache/hbase/pull/3369 | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3369/1/console | | versions | git=2.17.1 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #3368: HBASE-25989 FanOutOneBlockAsyncDFSOutput using shaded protobuf in hdf…
Apache-HBase commented on pull request #3368: URL: https://github.com/apache/hbase/pull/3368#issuecomment-857896720 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 30s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 20s | master passed | | +1 :green_heart: | compile | 0m 22s | master passed | | +1 :green_heart: | shadedjars | 9m 15s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 19s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 6s | the patch passed | | +1 :green_heart: | compile | 0m 22s | the patch passed | | +1 :green_heart: | javac | 0m 22s | the patch passed | | +1 :green_heart: | shadedjars | 9m 17s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 17s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 38s | hbase-asyncfs in the patch passed. 
| | | | 31m 43s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3368/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3368 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux e9d96e81fbd2 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 7f7a293cb5 | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3368/1/testReport/ | | Max. process+thread count | 628 (vs. ulimit of 3) | | modules | C: hbase-asyncfs U: hbase-asyncfs | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3368/1/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3368: HBASE-25989 FanOutOneBlockAsyncDFSOutput using shaded protobuf in hdf…
Apache-HBase commented on pull request #3368: URL: https://github.com/apache/hbase/pull/3368#issuecomment-857895440 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 1s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 28s | master passed | | +1 :green_heart: | compile | 0m 23s | master passed | | +1 :green_heart: | shadedjars | 8m 13s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 21s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 12s | the patch passed | | +1 :green_heart: | compile | 0m 22s | the patch passed | | +1 :green_heart: | javac | 0m 22s | the patch passed | | +1 :green_heart: | shadedjars | 8m 9s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 18s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 49s | hbase-asyncfs in the patch passed. 
| | | | 30m 28s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3368/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3368 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 2020908c536e 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 7f7a293cb5 | | Default Java | AdoptOpenJDK-11.0.10+9 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3368/1/testReport/ | | Max. process+thread count | 650 (vs. ulimit of 3) | | modules | C: hbase-asyncfs U: hbase-asyncfs | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3368/1/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] clarax opened a new pull request #3370: HBASE-25739 TableSkewCostFunction need to use aggregated deviation
clarax opened a new pull request #3370: URL: https://github.com/apache/hbase/pull/3370 This is to reapply https://github.com/apache/hbase/pull/3067 since the trunk has been significantly refactored. The old PR was merged but reverted for a flaky test. Will work from here on the flaky tests.
[GitHub] [hbase] catalin-luca commented on pull request #3369: HBASE-25859 Reference class incorrectly parses the protobuf magic marker
catalin-luca commented on pull request #3369: URL: https://github.com/apache/hbase/pull/3369#issuecomment-857887955 This is a backport of: https://issues.apache.org/jira/browse/HBASE-25859
[GitHub] [hbase] catalin-luca opened a new pull request #3369: HBASE-25859 Reference class incorrectly parses the protobuf magic marker
catalin-luca opened a new pull request #3369: URL: https://github.com/apache/hbase/pull/3369
[GitHub] [hbase-filesystem] steveloughran commented on pull request #23: HBASE-25900. HBoss tests compile/failure against Hadoop 3.3.1
steveloughran commented on pull request #23: URL: https://github.com/apache/hbase-filesystem/pull/23#issuecomment-857882775 (i.e. the change in tagging was to let people editing it know of an external use)
[GitHub] [hbase-filesystem] steveloughran commented on pull request #23: HBASE-25900. HBoss tests compile/failure against Hadoop 3.3.1
steveloughran commented on pull request #23: URL: https://github.com/apache/hbase-filesystem/pull/23#issuecomment-857882046 I didn't mark it as evolving until the change went in. Before then it was tagged as private. I only marked it as limited public/evolving afterwards as:
* I'd found it was being used
* the new design is intended to evolve without breaking the signature
[GitHub] [hbase] saintstack commented on pull request #3368: HBASE-25989 FanOutOneBlockAsyncDFSOutput using shaded protobuf in hdf…
saintstack commented on pull request #3368: URL: https://github.com/apache/hbase/pull/3368#issuecomment-857876036 For example, https://stackoverflow.com/questions/7007831/instantiate-nested-static-class-using-class-forname A test for this is tough because it would need to change the assert based on which version of hadoop the test is running with.
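The linked Stack Overflow question concerns `Class.forName` on a nested static class: the lookup needs the binary name, which joins the enclosing and nested class names with `$`, not the dotted source-style name. A small illustration using a JDK type:

```java
public class NestedLookup {
    /** Class.forName with the source-style dotted name of a nested type fails. */
    static boolean dottedNameFails() {
        try {
            Class.forName("java.util.Map.Entry"); // not a binary name
            return false;
        } catch (ClassNotFoundException e) {
            return true;
        }
    }

    /** The binary name separates the nesting levels with '$'. */
    static boolean dollarNameWorks() {
        try {
            return Class.forName("java.util.Map$Entry").isInterface();
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(dottedNameFails() + " " + dollarNameWorks());
    }
}
```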
[GitHub] [hbase] saintstack opened a new pull request #3368: HBASE-25989 FanOutOneBlockAsyncDFSOutput using shaded protobuf in hdf…
saintstack opened a new pull request #3368: URL: https://github.com/apache/hbase/pull/3368 …s 3.3+
[jira] [Created] (HBASE-25989) FanOutOneBlockAsyncDFSOutput using shaded protobuf in hdfs 3.3+
Michael Stack created HBASE-25989: - Summary: FanOutOneBlockAsyncDFSOutput using shaded protobuf in hdfs 3.3+ Key: HBASE-25989 URL: https://issues.apache.org/jira/browse/HBASE-25989 Project: HBase Issue Type: Sub-task Reporter: Michael Stack The parent added some fancy dancing to make it so on hadoop-3.3.0+ we'd use hadoop's shaded protobuf rather than the non-relocated protobuf. With hdfs 3.3, the 'trick' is not working, so we continue to use the unshaded protobuf. Fix is trivial. Found this testing the 3.3.1RC3. Hard to see because whether we use shaded or unshaded is logged at DEBUG level. If you set DEBUG level and run TestFanOutOneBlockAsyncDFSOutput with the hdfs 3.3.1 RC candidate in place you'll see it uses the unshaded protobuf.
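The "trick" the parent issue refers to amounts to probing the classpath for hadoop's relocated protobuf classes and falling back to the unshaded ones when they are absent. A generic sketch of that probing pattern, hedged: the candidate class names below are illustrative (the `org.apache.hadoop.thirdparty` prefix is hadoop-thirdparty's relocation convention) and are not necessarily the exact names the helper checks:

```java
public class ProtobufProbe {
    /** Returns the first loadable class among the given binary names, or null. */
    static Class<?> firstPresent(String... names) {
        for (String name : names) {
            try {
                return Class.forName(name);
            } catch (ClassNotFoundException e) {
                // not on the classpath; try the next candidate
            }
        }
        return null;
    }

    public static void main(String[] args) {
        // Illustrative: prefer hadoop's relocated protobuf, fall back to unshaded.
        Class<?> c = firstPresent(
            "org.apache.hadoop.thirdparty.protobuf.MessageLite", // hadoop 3.3+ shading prefix
            "com.google.protobuf.MessageLite");                  // unshaded fallback
        System.out.println(c);
    }
}
```

The bug described in the issue is what happens when such a probe quietly takes the fallback branch on a classpath where the shaded classes should have won.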
[jira] [Updated] (HBASE-25987) Make SSL keystore type configurable for HBase ThriftServer
[ https://issues.apache.org/jira/browse/HBASE-25987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wellington Chevreuil updated HBASE-25987: - Affects Version/s: 2.5.0 3.0.0-alpha-1 2.2.7 2.3.5 > Make SSL keystore type configurable for HBase ThriftServer > -- > > Key: HBASE-25987 > URL: https://issues.apache.org/jira/browse/HBASE-25987 > Project: HBase > Issue Type: Improvement > Components: Thrift >Affects Versions: 3.0.0-alpha-1, 2.2.7, 2.5.0, 2.3.5, 2.4.4 >Reporter: Mate Szalay-Beko >Assignee: Mate Szalay-Beko >Priority: Major > > We get the following exception, when trying to start Hbase Thrift Server in > http mode (hbase.regionserver.thrift.http=true) and use non default (not > "jks") keystore type: > > {noformat} > 2021-06-08 07:40:10,275 ERROR org.apache.hadoop.hbase.thrift.ThriftServer: > Cannot run ThriftServer > java.io.IOException: Invalid keystore format > at > sun.security.provider.JavaKeyStore.engineLoad(JavaKeyStore.java:663) > at > sun.security.provider.JavaKeyStore$JKS.engineLoad(JavaKeyStore.java:56) > at > sun.security.provider.KeyStoreDelegator.engineLoad(KeyStoreDelegator.java:224) > at > sun.security.provider.JavaKeyStore$DualFormatJKS.engineLoad(JavaKeyStore.java:70) > at java.security.KeyStore.load(KeyStore.java:1445) > at > org.eclipse.jetty.util.security.CertificateUtils.getKeyStore(CertificateUtils.java:54) > at > org.eclipse.jetty.util.ssl.SslContextFactory.loadKeyStore(SslContextFactory.java:1197) > at > org.eclipse.jetty.util.ssl.SslContextFactory.load(SslContextFactory.java:321) > at > org.eclipse.jetty.util.ssl.SslContextFactory.doStart(SslContextFactory.java:243) > at > org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73) > at > org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169) > at > org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117) > at > 
org.eclipse.jetty.server.SslConnectionFactory.doStart(SslConnectionFactory.java:97) > at > org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73) > at > org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169) > at > org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117) > at > org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:321) > at > org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:81) > at > org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:234) > at > org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73) > at org.eclipse.jetty.server.Server.doStart(Server.java:401) > at > org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73) > at > org.apache.hadoop.hbase.thrift.ThriftServer$2.run(ThriftServer.java:861) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:360) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878) > at > org.apache.hadoop.hbase.thrift.ThriftServer.run(ThriftServer.java:855) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) > at > org.apache.hadoop.hbase.thrift.ThriftServer.main(ThriftServer.java:882){noformat} > This problem appeared after we applied HBASE-25930 to our local HBase > version. It looks, we never had a parameter to specify the keystore type for > thrift http server. Before HBASE-25930, the keystore type used by the thrift > http server was accidentally defined based on the InfoServer (web ui) > configuration of "ssl.server.keystore.type". Before HBASE-25930, the > InfoServer was started first and it set the keystore type in the global > keystore manager, which setting propagated to the thrift http server too, > without any override. 
In HBASE-25930 the startup order changed, and the > thrift http server configuration happens before the InfoServer start, so we > lack this accidental configuration change now. > Given that we have independent keystore file path / password parameters > already for the thrift http server, the proper solution is to create a new > parameter also for the keystore type of the thrift http server: > *hbase.thrift.ssl.keystore.type* (defaulting to "jks").
[jira] [Resolved] (HBASE-25987) Make SSL keystore type configurable for HBase ThriftServer
[ https://issues.apache.org/jira/browse/HBASE-25987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wellington Chevreuil resolved HBASE-25987. -- Resolution: Fixed Merged into master and base 2 branches. Thanks for the contribution, [~symat]! > Make SSL keystore type configurable for HBase ThriftServer > -- > > Key: HBASE-25987 > URL: https://issues.apache.org/jira/browse/HBASE-25987 > Project: HBase > Issue Type: Improvement > Components: Thrift >Affects Versions: 3.0.0-alpha-1, 2.2.7, 2.5.0, 2.3.5, 2.4.4 >Reporter: Mate Szalay-Beko >Assignee: Mate Szalay-Beko >Priority: Major > Fix For: 3.0.0-alpha-1, 2.5.0, 2.3.6, 2.4.5 > > > We get the following exception, when trying to start Hbase Thrift Server in > http mode (hbase.regionserver.thrift.http=true) and use non default (not > "jks") keystore type: > > {noformat} > 2021-06-08 07:40:10,275 ERROR org.apache.hadoop.hbase.thrift.ThriftServer: > Cannot run ThriftServer > java.io.IOException: Invalid keystore format > at > sun.security.provider.JavaKeyStore.engineLoad(JavaKeyStore.java:663) > at > sun.security.provider.JavaKeyStore$JKS.engineLoad(JavaKeyStore.java:56) > at > sun.security.provider.KeyStoreDelegator.engineLoad(KeyStoreDelegator.java:224) > at > sun.security.provider.JavaKeyStore$DualFormatJKS.engineLoad(JavaKeyStore.java:70) > at java.security.KeyStore.load(KeyStore.java:1445) > at > org.eclipse.jetty.util.security.CertificateUtils.getKeyStore(CertificateUtils.java:54) > at > org.eclipse.jetty.util.ssl.SslContextFactory.loadKeyStore(SslContextFactory.java:1197) > at > org.eclipse.jetty.util.ssl.SslContextFactory.load(SslContextFactory.java:321) > at > org.eclipse.jetty.util.ssl.SslContextFactory.doStart(SslContextFactory.java:243) > at > org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73) > at > org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169) > at > 
org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117) > at > org.eclipse.jetty.server.SslConnectionFactory.doStart(SslConnectionFactory.java:97) > at > org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73) > at > org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169) > at > org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117) > at > org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:321) > at > org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:81) > at > org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:234) > at > org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73) > at org.eclipse.jetty.server.Server.doStart(Server.java:401) > at > org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73) > at > org.apache.hadoop.hbase.thrift.ThriftServer$2.run(ThriftServer.java:861) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:360) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878) > at > org.apache.hadoop.hbase.thrift.ThriftServer.run(ThriftServer.java:855) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) > at > org.apache.hadoop.hbase.thrift.ThriftServer.main(ThriftServer.java:882){noformat} > This problem appeared after we applied HBASE-25930 to our local HBase > version. It looks, we never had a parameter to specify the keystore type for > thrift http server. Before HBASE-25930, the keystore type used by the thrift > http server was accidentally defined based on the InfoServer (web ui) > configuration of "ssl.server.keystore.type". 
Before HBASE-25930, the InfoServer was started first and set the keystore type in the global keystore manager; that setting propagated to the thrift http server too, without any override. In HBASE-25930 the startup order changed, and the thrift http server is now configured before the InfoServer starts, so we lose this accidental configuration. Given that we already have independent keystore file path / password parameters for the thrift http server, the proper solution is to create a new parameter for the keystore type of the thrift http server as well: *hbase.thrift.ssl.keystore.type* (defaulting to "jks"). -- This message was sent by Atlassian Jira (v8.3.4#803005)
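With the fix described above, enabling a non-default keystore type for the Thrift server in http mode would look roughly like the hbase-site.xml fragment below. Only hbase.thrift.ssl.keystore.type comes from this issue (and hbase.regionserver.thrift.http is named in the description); the other property names are the pre-existing Thrift SSL settings as recalled here, so verify them against your HBase version before use:

```xml
<!-- Illustrative hbase-site.xml sketch: run the Thrift server over https
     with a PKCS12 keystore instead of the default JKS. Property names other
     than hbase.thrift.ssl.keystore.type and hbase.regionserver.thrift.http
     are recalled from memory, not taken from this issue. -->
<property>
  <name>hbase.regionserver.thrift.http</name>
  <value>true</value>
</property>
<property>
  <name>hbase.thrift.ssl.enabled</name>
  <value>true</value>
</property>
<property>
  <name>hbase.thrift.ssl.keystore.store</name>
  <value>/etc/hbase/conf/thrift-keystore.p12</value>
</property>
<property>
  <name>hbase.thrift.ssl.keystore.type</name>
  <!-- new in HBASE-25987; defaults to "jks" when unset -->
  <value>PKCS12</value>
</property>
```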
[jira] [Updated] (HBASE-25987) Make SSL keystore type configurable for HBase ThriftServer
[ https://issues.apache.org/jira/browse/HBASE-25987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wellington Chevreuil updated HBASE-25987: - Fix Version/s: 2.4.5 2.3.6 2.5.0 3.0.0-alpha-1 > Make SSL keystore type configurable for HBase ThriftServer > -- > > Key: HBASE-25987 > URL: https://issues.apache.org/jira/browse/HBASE-25987 > Project: HBase > Issue Type: Improvement > Components: Thrift >Affects Versions: 3.0.0-alpha-1, 2.2.7, 2.5.0, 2.3.5, 2.4.4 >Reporter: Mate Szalay-Beko >Assignee: Mate Szalay-Beko >Priority: Major > Fix For: 3.0.0-alpha-1, 2.5.0, 2.3.6, 2.4.5 > > > We get the following exception, when trying to start Hbase Thrift Server in > http mode (hbase.regionserver.thrift.http=true) and use non default (not > "jks") keystore type: > > {noformat} > 2021-06-08 07:40:10,275 ERROR org.apache.hadoop.hbase.thrift.ThriftServer: > Cannot run ThriftServer > java.io.IOException: Invalid keystore format > at > sun.security.provider.JavaKeyStore.engineLoad(JavaKeyStore.java:663) > at > sun.security.provider.JavaKeyStore$JKS.engineLoad(JavaKeyStore.java:56) > at > sun.security.provider.KeyStoreDelegator.engineLoad(KeyStoreDelegator.java:224) > at > sun.security.provider.JavaKeyStore$DualFormatJKS.engineLoad(JavaKeyStore.java:70) > at java.security.KeyStore.load(KeyStore.java:1445) > at > org.eclipse.jetty.util.security.CertificateUtils.getKeyStore(CertificateUtils.java:54) > at > org.eclipse.jetty.util.ssl.SslContextFactory.loadKeyStore(SslContextFactory.java:1197) > at > org.eclipse.jetty.util.ssl.SslContextFactory.load(SslContextFactory.java:321) > at > org.eclipse.jetty.util.ssl.SslContextFactory.doStart(SslContextFactory.java:243) > at > org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73) > at > org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169) > at > org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117) > at > 
org.eclipse.jetty.server.SslConnectionFactory.doStart(SslConnectionFactory.java:97) > at > org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73) > at > org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169) > at > org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117) > at > org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:321) > at > org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:81) > at > org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:234) > at > org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73) > at org.eclipse.jetty.server.Server.doStart(Server.java:401) > at > org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73) > at > org.apache.hadoop.hbase.thrift.ThriftServer$2.run(ThriftServer.java:861) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:360) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878) > at > org.apache.hadoop.hbase.thrift.ThriftServer.run(ThriftServer.java:855) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) > at > org.apache.hadoop.hbase.thrift.ThriftServer.main(ThriftServer.java:882){noformat} > This problem appeared after we applied HBASE-25930 to our local HBase > version. It looks, we never had a parameter to specify the keystore type for > thrift http server. Before HBASE-25930, the keystore type used by the thrift > http server was accidentally defined based on the InfoServer (web ui) > configuration of "ssl.server.keystore.type". Before HBASE-25930, the > InfoServer was started first and it set the keystore type in the global > keystore manager, which setting propagated to the thrift http server too, > without any override. 
In HBASE-25930 the startup order changed, and the thrift http server is now configured before the InfoServer starts, so we lose this accidental configuration. Given that we already have independent keystore file path / password parameters for the thrift http server, the proper solution is to create a new parameter for the keystore type of the thrift http server as well: *hbase.thrift.ssl.keystore.type* (defaulting to "jks").
[jira] [Commented] (HBASE-25988) Store the store file list by a file
[ https://issues.apache.org/jira/browse/HBASE-25988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17360169#comment-17360169 ] Duo Zhang commented on HBASE-25988: --- Attached a design doc. PTAL. > Store the store file list by a file > --- > > Key: HBASE-25988 > URL: https://issues.apache.org/jira/browse/HBASE-25988 > Project: HBase > Issue Type: Sub-task > Components: HFile >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major >
[GitHub] [hbase] wchevreuil merged pull request #3367: HBASE-25987 Make SSL keystore type configurable for HBase ThriftServer
wchevreuil merged pull request #3367: URL: https://github.com/apache/hbase/pull/3367 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Resolved] (HBASE-25391) Flush directly into data directory, skip rename when committing flush
[ https://issues.apache.org/jira/browse/HBASE-25391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wellington Chevreuil resolved HBASE-25391. -- Resolution: Fixed > Flush directly into data directory, skip rename when committing flush > - > > Key: HBASE-25391 > URL: https://issues.apache.org/jira/browse/HBASE-25391 > Project: HBase > Issue Type: Sub-task >Reporter: Tak-Lon (Stephen) Wu >Assignee: Wellington Chevreuil >Priority: Major > > When flushing the memstore snapshot to an HFile, we write it directly > to the data directory.
[jira] [Resolved] (HBASE-25950) add basic compaction server metric
[ https://issues.apache.org/jira/browse/HBASE-25950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yulin Niu resolved HBASE-25950. --- Resolution: Fixed > add basic compaction server metric > -- > > Key: HBASE-25950 > URL: https://issues.apache.org/jira/browse/HBASE-25950 > Project: HBase > Issue Type: Sub-task >Reporter: Yulin Niu >Assignee: Yulin Niu >Priority: Major > > the ServerLoad has RegionLoad and ReplicationLoadSource/ReplicationLoadSink > metrics, which have nothing to do with the compaction server. So, introduce > CompactionServerLoad, which only has compaction-related metrics
[jira] [Commented] (HBASE-25950) add basic compaction server metric
[ https://issues.apache.org/jira/browse/HBASE-25950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17360153#comment-17360153 ] Yulin Niu commented on HBASE-25950: --- Pushed to branch HBASE-25714; thanks for the review, [~zhangduo]. > add basic compaction server metric > -- > > Key: HBASE-25950 > URL: https://issues.apache.org/jira/browse/HBASE-25950 > Project: HBase > Issue Type: Sub-task >Reporter: Yulin Niu >Assignee: Yulin Niu >Priority: Major > > the ServerLoad has RegionLoad and ReplicationLoadSource/ReplicationLoadSink > metrics, which have nothing to do with the compaction server. So, introduce > CompactionServerLoad, which only has compaction-related metrics
[GitHub] [hbase] nyl3532016 merged pull request #3350: HBASE-25950 add basic compaction server metric
nyl3532016 merged pull request #3350: URL: https://github.com/apache/hbase/pull/3350
[GitHub] [hbase] Apache-HBase commented on pull request #3350: HBASE-25950 add basic compaction server metric
Apache-HBase commented on pull request #3350: URL: https://github.com/apache/hbase/pull/3350#issuecomment-857770624 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 10s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +0 :ok: | prototool | 0m 0s | prototool was not available. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ HBASE-25714 Compile Tests _ | | +0 :ok: | mvndep | 0m 20s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 10s | HBASE-25714 passed | | +1 :green_heart: | compile | 5m 50s | HBASE-25714 passed | | +1 :green_heart: | checkstyle | 1m 49s | HBASE-25714 passed | | +1 :green_heart: | spotbugs | 7m 25s | HBASE-25714 passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 12s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 2s | the patch passed | | +1 :green_heart: | compile | 5m 45s | the patch passed | | +1 :green_heart: | cc | 5m 45s | the patch passed | | +1 :green_heart: | javac | 5m 45s | the patch passed | | -0 :warning: | checkstyle | 0m 27s | hbase-client: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 19m 53s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0. | | +1 :green_heart: | hbaseprotoc | 2m 3s | the patch passed | | +1 :green_heart: | spotbugs | 8m 9s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 33s | The patch does not generate ASF License warnings. 
| | | | 71m 57s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3350/3/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3350 | | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti checkstyle compile cc hbaseprotoc prototool | | uname | Linux 2ad3f5c549cd 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | HBASE-25714 / 997af0215c | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3350/3/artifact/yetus-general-check/output/diff-checkstyle-hbase-client.txt | | Max. process+thread count | 86 (vs. ulimit of 3) | | modules | C: hbase-protocol-shaded hbase-client hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3350/3/console | | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #3350: HBASE-25950 add basic compaction server metric
Apache-HBase commented on pull request #3350: URL: https://github.com/apache/hbase/pull/3350#issuecomment-857751323 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 7s | Docker mode activated. | | -0 :warning: | yetus | 0m 4s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ HBASE-25714 Compile Tests _ | | +0 :ok: | mvndep | 0m 27s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 3s | HBASE-25714 passed | | +1 :green_heart: | compile | 2m 15s | HBASE-25714 passed | | +1 :green_heart: | shadedjars | 8m 22s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 11s | HBASE-25714 passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 13s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 3s | the patch passed | | +1 :green_heart: | compile | 2m 16s | the patch passed | | +1 :green_heart: | javac | 2m 16s | the patch passed | | +1 :green_heart: | shadedjars | 8m 23s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 10s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 0m 48s | hbase-protocol-shaded in the patch passed. | | +1 :green_heart: | unit | 1m 26s | hbase-client in the patch passed. | | -1 :x: | unit | 12m 7s | hbase-server in the patch failed. 
| | | | 49m 38s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3350/3/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3350 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux e77120b2d3e4 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | HBASE-25714 / 997af0215c | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3350/3/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3350/3/testReport/ | | Max. process+thread count | 671 (vs. ulimit of 3) | | modules | C: hbase-protocol-shaded hbase-client hbase-server U: . | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3350/3/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (HBASE-25988) Store the store file list by a file
[ https://issues.apache.org/jira/browse/HBASE-25988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-25988: -- Summary: Store the store file list by a file (was: Store store file list by plain file) > Store the store file list by a file > --- > > Key: HBASE-25988 > URL: https://issues.apache.org/jira/browse/HBASE-25988 > Project: HBase > Issue Type: Sub-task > Components: HFile >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major >
[GitHub] [hbase] Apache-HBase commented on pull request #3297: HBASE-25905 Limit the shutdown time of WAL
Apache-HBase commented on pull request #3297: URL: https://github.com/apache/hbase/pull/3297#issuecomment-857727778 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 27s | Docker mode activated. | | -0 :warning: | yetus | 0m 4s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 45s | master passed | | +1 :green_heart: | compile | 0m 58s | master passed | | +1 :green_heart: | shadedjars | 8m 1s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 38s | master passed | | -0 :warning: | patch | 8m 51s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 49s | the patch passed | | +1 :green_heart: | compile | 1m 1s | the patch passed | | +1 :green_heart: | javac | 1m 1s | the patch passed | | +1 :green_heart: | shadedjars | 8m 23s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 38s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 148m 37s | hbase-server in the patch passed. 
| | | | 178m 36s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3297/3/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3297 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 357357e552f2 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 471e8159f0 | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3297/3/testReport/ | | Max. process+thread count | 4255 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3297/3/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (HBASE-25988) Store store file list by plain file
[ https://issues.apache.org/jira/browse/HBASE-25988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-25988: -- Summary: Store store file list by plain file (was: Store hfile list by plain file) > Store store file list by plain file > --- > > Key: HBASE-25988 > URL: https://issues.apache.org/jira/browse/HBASE-25988 > Project: HBase > Issue Type: Sub-task > Components: HFile >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major >
[GitHub] [hbase] Apache-HBase commented on pull request #3297: HBASE-25905 Limit the shutdown time of WAL
Apache-HBase commented on pull request #3297: URL: https://github.com/apache/hbase/pull/3297#issuecomment-857723606 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 28s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 13s | master passed | | +1 :green_heart: | compile | 1m 10s | master passed | | +1 :green_heart: | shadedjars | 8m 11s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 39s | master passed | | -0 :warning: | patch | 9m 3s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 20s | the patch passed | | +1 :green_heart: | compile | 1m 12s | the patch passed | | +1 :green_heart: | javac | 1m 12s | the patch passed | | +1 :green_heart: | shadedjars | 8m 14s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 41s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 142m 21s | hbase-server in the patch passed. 
| | | | 173m 41s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3297/3/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3297 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 0d83408568b1 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 471e8159f0 | | Default Java | AdoptOpenJDK-11.0.10+9 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3297/3/testReport/ | | Max. process+thread count | 4509 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3297/3/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache9 commented on a change in pull request #3350: HBASE-25950 add basic compaction server metric
Apache9 commented on a change in pull request #3350: URL: https://github.com/apache/hbase/pull/3350#discussion_r648332915 ## File path: hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/master/CompactionServerListTmpl.jamon ## @@ -72,28 +74,18 @@ Arrays.sort(serverNames); <%java> -int totalRegions = 0; -int totalRequestsPerSecond = 0; -int inconsistentNodeNum = 0; -String masterVersion = VersionInfo.getVersion(); -for (ServerName serverName: serverNames) { - -ServerMetrics sl = master.getServerManager().getLoad(serverName); + int inconsistentNodeNum = 0; + String masterVersion = VersionInfo.getVersion(); + for(ServerName serverName:serverNames) + { Review comment: Why a new line? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache9 commented on a change in pull request #3350: HBASE-25950 add basic compaction server metric
Apache9 commented on a change in pull request #3350: URL: https://github.com/apache/hbase/pull/3350#discussion_r648289833 ## File path: hbase-client/src/main/java/org/apache/hadoop/hbase/CompactionServerMetricsBuilder.java ## @@ -0,0 +1,231 @@ +/** + * Copyright The Apache Software Foundation + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * http://www.apache.org/licenses/LICENSE-2.0 + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.hbase; + +import edu.umd.cs.findbugs.annotations.Nullable; + +import java.util.ArrayList; +import java.util.List; +import org.apache.hadoop.hbase.util.Strings; +import org.apache.yetus.audience.InterfaceAudience; +import org.apache.hbase.thirdparty.com.google.common.base.Preconditions; +import org.apache.hadoop.hbase.shaded.protobuf.generated.ClusterStatusProtos; + + +@InterfaceAudience.Private +public final class CompactionServerMetricsBuilder { + + /** + * @param sn the server name + * @return a empty metrics + */ + public static CompactionServerMetrics of(ServerName sn) { +return newBuilder(sn).build(); + } + + public static CompactionServerMetrics of(ServerName sn, int versionNumber, String version) { +return newBuilder(sn).setVersionNumber(versionNumber).setVersion(version).build(); + } + + public static CompactionServerMetrics toCompactionServerMetrics(ServerName serverName, + ClusterStatusProtos.CompactionServerLoad serverLoadPB) { +return toCompactionServerMetrics(serverName, 0, "0.0.0", serverLoadPB); + } + + public static CompactionServerMetrics toCompactionServerMetrics(ServerName serverName, +int versionNumber, String version, ClusterStatusProtos.CompactionServerLoad serverLoadPB) { +return CompactionServerMetricsBuilder.newBuilder(serverName) + .setInfoServerPort(serverLoadPB.getInfoServerPort()) + .setCompactedCellCount(serverLoadPB.getCompactedCells()) + .setCompactingCellCount(serverLoadPB.getCompactingCells()) + .addCompactionTasks(serverLoadPB.getCompactionTasksList()) + .setTotalNumberOfRequests(serverLoadPB.getTotalNumberOfRequests()) + .setLastReportTimestamp(serverLoadPB.getReportStartTime()).setVersionNumber(versionNumber) + .setVersion(version).build(); + } + + public static CompactionServerMetricsBuilder newBuilder(ServerName sn) { +return new CompactionServerMetricsBuilder(sn); + } + + private final ServerName serverName; + private int versionNumber; + private String version = "0.0.0"; + private int 
infoServerPort; + private long compactingCellCount; + private long compactedCellCount; + private long totalNumberOfRequests; + @Nullable Review comment: Nullable but we initialize it with an empty array? ## File path: hbase-client/src/main/java/org/apache/hadoop/hbase/CompactionServerMetricsBuilder.java ## @@ -0,0 +1,231 @@ +/** + * Copyright The Apache Software Foundation + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * http://www.apache.org/licenses/LICENSE-2.0 + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.hbase; + +import edu.umd.cs.findbugs.annotations.Nullable; + +import java.util.ArrayList; +import java.util.List; +import org.apache.hadoop.hbase.util.Strings; +import org.apache.yetus.audience.InterfaceAudience; +import org.apache.hbase.thirdparty.com.google.common.base.Preconditions; +import org.apache.hadoop.hbase.shaded.protobuf.generated.ClusterStatusProtos; + + +@InterfaceAudience.Private +public final class CompactionServerMetricsBuilder { + + /** + * @param sn the server name + * @return a empty metrics + */ + public static Compactio
[jira] [Updated] (HBASE-25988) Store hfile list by plain file
[ https://issues.apache.org/jira/browse/HBASE-25988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-25988: -- Component/s: HFile > Store hfile list by plain file > -- > > Key: HBASE-25988 > URL: https://issues.apache.org/jira/browse/HBASE-25988 > Project: HBase > Issue Type: Sub-task > Components: HFile >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major >
[jira] [Commented] (HBASE-25984) FSHLog WAL lockup with sync future reuse [RS deadlock]
[ https://issues.apache.org/jira/browse/HBASE-25984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17360044#comment-17360044 ] Viraj Jasani commented on HBASE-25984: -- [~bharathv] I think you can create PR. Support to run QA with uploaded patch is no longer available IIRC. > FSHLog WAL lockup with sync future reuse [RS deadlock] > -- > > Key: HBASE-25984 > URL: https://issues.apache.org/jira/browse/HBASE-25984 > Project: HBase > Issue Type: Bug > Components: regionserver, wal >Affects Versions: 3.0.0-alpha-1, 1.7.0, 2.5.0, 2.4.5 >Reporter: Bharath Vissapragada >Assignee: Bharath Vissapragada >Priority: Critical > Labels: deadlock, hang > Attachments: HBASE-25984-unit-test.patch > > > We use FSHLog as the WAL implementation (branch-1 based) and under heavy load > we noticed the WAL system gets locked up due to a subtle bug involving racy > code with sync future reuse. This bug applies to all FSHLog implementations > across branches. > Symptoms: > On heavily loaded clusters with large write load we noticed that the region > servers are hanging abruptly with filled up handler queues and stuck MVCC > indicating appends/syncs not making any progress. > {noformat} > WARN [8,queue=9,port=60020] regionserver.MultiVersionConcurrencyControl - > STUCK for : 296000 millis. > MultiVersionConcurrencyControl{readPoint=172383686, writePoint=172383690, > regionName=1ce4003ab60120057734ffe367667dca} > WARN [6,queue=2,port=60020] regionserver.MultiVersionConcurrencyControl - > STUCK for : 296000 millis. > MultiVersionConcurrencyControl{readPoint=171504376, writePoint=171504381, > regionName=7c441d7243f9f504194dae6bf2622631} > {noformat} > All the handlers are stuck waiting for the sync futures and timing out. > {noformat} > java.lang.Object.wait(Native Method) > > org.apache.hadoop.hbase.regionserver.wal.SyncFuture.get(SyncFuture.java:183) > > org.apache.hadoop.hbase.regionserver.wal.FSHLog.blockOnSync(FSHLog.java:1509) > . 
> {noformat} > Log rolling is stuck because it was unable to attain a safe point > {noformat} >java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277) > org.apache.hadoop.hbase.regionserver.wal.FSHLog$SafePointZigZagLatch.waitSafePoint(FSHLog.java:1799) > > org.apache.hadoop.hbase.regionserver.wal.FSHLog.replaceWriter(FSHLog.java:900) > {noformat} > and the Ring buffer consumer thinks that there are some outstanding syncs > that need to finish. > {noformat} > > org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.attainSafePoint(FSHLog.java:2031) > > org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1999) > > org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1857) > {noformat} > On the other hand, SyncRunner threads are idle and just waiting for work, > implying that there are no pending SyncFutures that need to be run > {noformat} >sun.misc.Unsafe.park(Native Method) > java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) > > java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) > > java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) > > org.apache.hadoop.hbase.regionserver.wal.FSHLog$SyncRunner.run(FSHLog.java:1297) > java.lang.Thread.run(Thread.java:748) > {noformat} > Overall the WAL system is deadlocked and could make no progress until it was > aborted. I got to the bottom of this issue and have a patch that can fix it > (more details in the comments due to word limit in the description). -- This message was sent by Atlassian Jira (v8.3.4#803005)
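The lockup described in the report can be modeled in a few lines. The sketch below is hypothetical (the class and method names are invented, not the actual FSHLog code): if a handler reuses its SyncFuture for a new txid before the old sync is acknowledged, the ring buffer consumer's outstanding-sync count never drains back to zero, so the safe point needed for log rolling is never attained.

```java
// Hypothetical model of the sync-future accounting, NOT the real FSHLog code:
// a reused future silently drops the stale completion, so `outstanding`
// never returns to zero.
class ReusableSyncFuture {
    private long txid;
    synchronized void reset(long t) { txid = t; }               // handler reuses the future
    synchronized boolean markDone(long t) { return t == txid; } // stale txid is ignored
}

class RingBufferConsumerModel {
    int outstanding;
    void append(ReusableSyncFuture f, long txid) { f.reset(txid); outstanding++; }
    void syncCompleted(ReusableSyncFuture f, long txid) {
        if (f.markDone(txid)) outstanding--;  // stale completion decrements nothing
    }
    boolean safePointAttainable() { return outstanding == 0; }
}
```

In this model, when the same future is re-enqueued for txid 2 before the completion for txid 1 arrives, the txid 1 completion is treated as stale, `outstanding` stays above zero forever, and the consumer keeps waiting for syncs while the SyncRunners sit idle, matching the thread dumps quoted above.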
[jira] [Work started] (HBASE-25988) Store hfile list by plain file
[ https://issues.apache.org/jira/browse/HBASE-25988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HBASE-25988 started by Duo Zhang. - > Store hfile list by plain file > -- > > Key: HBASE-25988 > URL: https://issues.apache.org/jira/browse/HBASE-25988 > Project: HBase > Issue Type: Sub-task >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major >
[jira] [Assigned] (HBASE-25988) Store hfile list by plain file
[ https://issues.apache.org/jira/browse/HBASE-25988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang reassigned HBASE-25988: - Assignee: Duo Zhang > Store hfile list by plain file > -- > > Key: HBASE-25988 > URL: https://issues.apache.org/jira/browse/HBASE-25988 > Project: HBase > Issue Type: Sub-task >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major >
[GitHub] [hbase] Apache-HBase commented on pull request #3312: HBASE-25923 Region state stuck in PENDING_OPEN
Apache-HBase commented on pull request #3312: URL: https://github.com/apache/hbase/pull/3312#issuecomment-857661023 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 6m 59s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | -0 :warning: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ branch-1 Compile Tests _ | | +1 :green_heart: | mvninstall | 9m 54s | branch-1 passed | | +1 :green_heart: | compile | 0m 40s | branch-1 passed with JDK Azul Systems, Inc.-1.8.0_262-b19 | | +1 :green_heart: | compile | 0m 45s | branch-1 passed with JDK Azul Systems, Inc.-1.7.0_272-b10 | | +1 :green_heart: | checkstyle | 1m 47s | branch-1 passed | | +1 :green_heart: | shadedjars | 3m 2s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 49s | branch-1 passed with JDK Azul Systems, Inc.-1.8.0_262-b19 | | +1 :green_heart: | javadoc | 0m 40s | branch-1 passed with JDK Azul Systems, Inc.-1.7.0_272-b10 | | +0 :ok: | spotbugs | 3m 4s | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +1 :green_heart: | findbugs | 3m 1s | branch-1 passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 50s | the patch passed | | +1 :green_heart: | compile | 0m 41s | the patch passed with JDK Azul Systems, Inc.-1.8.0_262-b19 | | +1 :green_heart: | javac | 0m 41s | the patch passed | | +1 :green_heart: | compile | 0m 45s | the patch passed with JDK Azul Systems, Inc.-1.7.0_272-b10 | | +1 :green_heart: | javac | 0m 45s | the patch passed | | +1 :green_heart: | checkstyle | 1m 40s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedjars | 2m 52s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | hadoopcheck | 4m 35s | Patch does not cause any errors with Hadoop 2.8.5 2.9.2. | | +1 :green_heart: | javadoc | 0m 30s | the patch passed with JDK Azul Systems, Inc.-1.8.0_262-b19 | | +1 :green_heart: | javadoc | 0m 43s | the patch passed with JDK Azul Systems, Inc.-1.7.0_272-b10 | | +1 :green_heart: | findbugs | 2m 56s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 150m 12s | hbase-server in the patch failed. | | +1 :green_heart: | asflicense | 0m 42s | The patch does not generate ASF License warnings. 
| | | | 198m 31s | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hbase.mapreduce.TestLoadIncrementalHFilesUseSecurityEndPoint | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3312/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3312 | | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux 67f15036683e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-home/workspace/Base-PreCommit-GitHub-PR_PR-3312/out/precommit/personality/provided.sh | | git revision | branch-1 / 0fd6eeb | | Default Java | Azul Systems, Inc.-1.7.0_272-b10 | | Multi-JDK versions | /usr/lib/jvm/zulu-8-amd64:Azul Systems, Inc.-1.8.0_262-b19 /usr/lib/jvm/zulu-7-amd64:Azul Systems, Inc.-1.7.0_272-b10 | | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3312/2/artifact/out/patch-unit-hbase-server.txt | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3312/2/testReport/ | | Max. process+thread count | 4733 (vs. ulimit of 1) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3312/2/console | | versions | git=1.9.1 maven=3.0.5 findbugs=3.0.1 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #3297: HBASE-25905 Limit the shutdown time of WAL
Apache-HBase commented on pull request #3297: URL: https://github.com/apache/hbase/pull/3297#issuecomment-857633060 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 28s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 53s | master passed | | +1 :green_heart: | compile | 3m 12s | master passed | | +1 :green_heart: | checkstyle | 1m 2s | master passed | | +1 :green_heart: | spotbugs | 2m 4s | master passed | | -0 :warning: | patch | 2m 13s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 35s | the patch passed | | +1 :green_heart: | compile | 3m 5s | the patch passed | | +1 :green_heart: | javac | 3m 5s | the patch passed | | +1 :green_heart: | checkstyle | 1m 1s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 17m 58s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0. | | +1 :green_heart: | spotbugs | 2m 15s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 14s | The patch does not generate ASF License warnings. 
| | | | 46m 29s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3297/3/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3297 | | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 26fb29b4a596 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 471e8159f0 | | Default Java | AdoptOpenJDK-1.8.0_282-b08 | | Max. process+thread count | 95 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3297/3/console | | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-25981) JVM crash when displaying regionserver UI
[ https://issues.apache.org/jira/browse/HBASE-25981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17360002#comment-17360002 ] Hudson commented on HBASE-25981: Results for branch branch-2.3 [build #233 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/233/]: (x) *{color:red}-1 overall{color}* details (if available): (x) {color:red}-1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/233/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/233/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/233/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/233/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 source release artifact{color} -- See build output for details. (x) {color:red}-1 client integration test{color} -- Something went wrong with this stage, [check relevant console output|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/233//console]. 
> JVM crash when displaying regionserver UI > - > > Key: HBASE-25981 > URL: https://issues.apache.org/jira/browse/HBASE-25981 > Project: HBase > Issue Type: Bug > Components: rpc, UI >Affects Versions: 3.0.0-alpha-1, 2.0.0 >Reporter: Xiaolin Ha >Assignee: Xiaolin Ha >Priority: Major > Fix For: 3.0.0-alpha-1, 2.5.0, 2.3.6, 2.4.5 > > Attachments: hs_err_pid116190.log-gha-data-hbase-cat0085-ui > > > The MonitoredRPCHandlerImpl refers to the params of a request, and will show > them when we call 'toJson()'. But the running RPC call may be cleaned up and > the ByteBuffer released before it is displayed in the UI. We need to ensure that the > life cycle of the RPC status monitor is contained within the life cycle of the RPC. > {code:java} > J 19267 C2 > org.apache.hbase.thirdparty.com.google.protobuf.TextFormat$Printer.printMessage(Lorg/apache/hbase/thirdparty/com/google/protobuf/MessageOrBuilder;Lorg/apache/hbase/thirdparty/com/google/protobuf/TextFormat$TextGenerator;)V > (73 bytes) @ 0x7f1ac7e54640 [0x7f1ac7e53f60+0x6e0] > J 20932 C2 > org.apache.hbase.thirdparty.com.google.protobuf.TextFormat$Printer.print(Lorg/apache/hbase/thirdparty/com/google/protobuf/MessageOrBuilder;Lorg/apache/hbase/thirdparty/com/google/protobuf/TextFormat$TextGenerator;)V > (34 bytes) @ 0x7f1ac68ab9b0 [0x7f1ac68ab880+0x130] > J 21843 C1 > org.apache.hbase.thirdparty.com.google.protobuf.AbstractMessage.toString()Ljava/lang/String; > (8 bytes) @ 0x7f1ac620e14c [0x7f1ac620dba0+0x5ac] > J 21835 C1 > org.apache.hadoop.hbase.monitoring.MonitoredRPCHandlerImpl.toMap()Ljava/util/Map; > (240 bytes) @ 0x7f1ac5009bf4 [0x7f1ac50071c0+0x2a34] > J 21833 C1 > org.apache.hadoop.hbase.monitoring.MonitoredRPCHandlerImpl.toJSON()Ljava/lang/String; > (5 bytes) @ 0x7f1ac74efb74 [0x7f1ac74efaa0+0xd4] > j > org.apache.hadoop.hbase.tmpl.common.TaskMonitorTmplImpl.renderNoFlush(Ljava/io/Writer;)V+259 > j > org.apache.hadoop.hbase.tmpl.common.TaskMonitorTmpl.renderNoFlush(Ljava/io/Writer;)V+16 > j > 
org.apache.hadoop.hbase.tmpl.regionserver.RSStatusTmplImpl.renderNoFlush(Ljava/io/Writer;)V+129 > {code} > [^hs_err_pid116190.log-gha-data-hbase-cat0085-ui]
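The fix direction described in the issue (keeping the monitor's view of the call inside the RPC's lifetime) can be sketched as follows; the class and its methods are hypothetical illustrations, not the actual MonitoredRPCHandlerImpl API:

```java
// Hypothetical sketch: snapshot the request while its buffers are still live
// and clear the snapshot when the RPC finishes, so the UI never dereferences
// a ByteBuffer that the RPC layer may already have recycled.
class RpcStatusMonitorSketch {
    private volatile String paramsSnapshot = "";

    void rpcStarted(String method, byte[] params) {
        // Copy-out happens here, while the request buffer is guaranteed valid.
        paramsSnapshot = method + "(" + params.length + " bytes)";
    }

    void rpcFinished() {
        paramsSnapshot = "";  // nothing for the UI to render after the call ends
    }

    String toJson() {
        // Renders only the snapshot, never the (possibly released) buffer.
        return "{\"call\":\"" + paramsSnapshot + "\"}";
    }
}
```

The design point is the same as in the issue: whatever the status page renders must have a lifetime independent of the RPC's buffers, either by copying up front (as here) or by clearing the monitor before the buffers are released.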
[GitHub] [hbase] symat commented on pull request #3367: HBASE-25987 Make SSL keystore type configurable for HBase ThriftServer
symat commented on pull request #3367: URL: https://github.com/apache/hbase/pull/3367#issuecomment-857618922 thanks for the quick review, @wchevreuil !
[GitHub] [hbase] sunhelly commented on a change in pull request #3297: HBASE-25905 Limit the shutdown time of WAL
sunhelly commented on a change in pull request #3297: URL: https://github.com/apache/hbase/pull/3297#discussion_r648200987 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/AbstractFSWAL.java ## @@ -986,14 +996,43 @@ public void shutdown() throws IOException { i.logCloseRequested(); } } -rollWriterLock.lock(); + +ExecutorService shutdownExecutor = Executors.newFixedThreadPool(1, Review comment: Hi, @pankaj72981 , I have merged the archiver and shutdown executor, please take a look, thanks.
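The diff above moves the close onto a dedicated executor. The general pattern of bounding a potentially hanging close with a timeout looks roughly like this; it is a sketch under assumed names, not the actual HBASE-25905 patch:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Sketch: run the close on a single-thread executor and give up after a
// deadline instead of letting a stuck writer block shutdown forever.
class BoundedWalShutdown {
    static void closeWithTimeout(Callable<Void> close, long timeoutMs) {
        ExecutorService shutdownExecutor = Executors.newSingleThreadExecutor();
        Future<Void> f = shutdownExecutor.submit(close);
        try {
            f.get(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (TimeoutException te) {
            f.cancel(true);  // interrupt the hung close attempt
            throw new UncheckedIOException(
                new IOException("WAL shutdown timed out after " + timeoutMs + " ms", te));
        } catch (InterruptedException | ExecutionException e) {
            throw new UncheckedIOException(new IOException("WAL shutdown failed", e));
        } finally {
            shutdownExecutor.shutdownNow();
        }
    }
}
```

A fast close returns normally; a close that exceeds the deadline is cancelled and surfaced as an exception, so the region server's shutdown path cannot be held hostage by one unresponsive writer.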
[jira] [Created] (HBASE-25988) Store hfile list by plain file
Duo Zhang created HBASE-25988: - Summary: Store hfile list by plain file Key: HBASE-25988 URL: https://issues.apache.org/jira/browse/HBASE-25988 Project: HBase Issue Type: Sub-task Reporter: Duo Zhang
[GitHub] [hbase] Apache-HBase commented on pull request #3366: HBASE-25985 ReplicationSourceWALReader#run - Reset sleepMultiplier in loop once out of any IOE
Apache-HBase commented on pull request #3366: URL: https://github.com/apache/hbase/pull/3366#issuecomment-857561696 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 4s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 5m 0s | master passed | | +1 :green_heart: | compile | 1m 20s | master passed | | +1 :green_heart: | shadedjars | 9m 4s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 44s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 48s | the patch passed | | +1 :green_heart: | compile | 1m 19s | the patch passed | | +1 :green_heart: | javac | 1m 19s | the patch passed | | +1 :green_heart: | shadedjars | 9m 9s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 42s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 200m 41s | hbase-server in the patch passed. 
| | | | 235m 43s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3366/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/3366 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 4063c98401b1 4.15.0-142-generic #146-Ubuntu SMP Tue Apr 13 01:11:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 1654dcfbfb | | Default Java | AdoptOpenJDK-11.0.10+9 | | Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3366/1/testReport/ | | Max. process+thread count | 3127 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3366/1/console | | versions | git=2.17.1 maven=3.6.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
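For reference, the sleepMultiplier behavior named in the HBASE-25985 PR title is the standard backoff-reset rule: the multiplier grows while IOExceptions persist and must drop back to 1 after the first successful iteration, otherwise one burst of failures leaves every later retry sleeping at the maximum. A hedged sketch (hypothetical helper, not the actual ReplicationSourceWALReader code):

```java
// Hypothetical helper mirroring the backoff rule described in HBASE-25985:
// failures grow the multiplier up to a cap, any success resets it to 1.
class BackoffSketch {
    static int nextSleepMultiplier(int current, int max, boolean lastAttemptFailed) {
        if (!lastAttemptFailed) {
            return 1;                       // out of the IOE streak: reset
        }
        return Math.min(current + 1, max);  // still failing: back off further
    }
}
```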