[jira] [Commented] (HBASE-27170) ByteBuffAllocator leak when decompressing blocks near minSizeForReservoirUse

2022-07-05 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17562956#comment-17562956
 ] 

Hudson commented on HBASE-27170:


Results for branch branch-2.4
[build #383 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/383/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/383/General_20Nightly_20Build_20Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/383/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/383/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/383/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> ByteBuffAllocator leak when decompressing blocks near minSizeForReservoirUse
> 
>
> Key: HBASE-27170
> URL: https://issues.apache.org/jira/browse/HBASE-27170
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha-3, 2.4.12
> Environment: 2.4.6 with backported patches (can list if desired). 
> Running with Temurin JDK 11.0.12+7
>Reporter: Bryan Beaudreault
>Assignee: Bryan Beaudreault
>Priority: Major
>  Labels: patch-available
> Fix For: 2.5.0, 3.0.0-alpha-4, 2.4.14
>
>
> Recently I started testing out BucketCache on some of our new hbase2 
> clusters. When BucketCache is enabled, it causes all disk reads to use 
> ByteBuffAllocator in an attempt to avoid heap allocations. Without 
> BucketCache enabled, even with ByteBuffAllocator enabled it will not be used 
> for disk reads.
> At first this was amazing: we had close to 0% heap allocations, which 
> drastically reduced CPU and GC time. Over time I noticed that the 
> ByteBuffAllocator pool filled up, and at that point all allocations came from 
> the heap and our heap allocation % went to 100%.
> We were using the default max buffer count, which was 4096 for a smaller host and 
> 7680 for a larger one. At first I figured we just needed more buffers, so I 
> upped it to 120k. It took longer, but the pool was still eventually exhausted.
> This does not cause an OOM, because I made sure to allocate enough direct 
> memory for max buffer count * buffer size (65k). It just causes the 
> usedBufCount to exceed maxBufCount, resulting in 100% heap allocations going 
> forward. It never recovers from this state until the server is restarted.
> Some early observations:
>  * Running a major compaction causes a drastic up-tick in used buffers. Major 
> compacting a 1.5GB region could easily expand the usedBufCount by ~10,000. 
> Most of those are never recovered. I could keep compacting the same region 
> over and over, each time increasing the usedBufCount by 5000-15000, until max 
> is reached.
>  * Despite usedBufCount increasing, direct memory usage largely does not 
> increase. This indicates to me that the DirectByteBuffers are being reclaimed 
> by GC, but the Recycler is not being called.
>  ** This was confirmed with a heap dump, which showed no obvious leak 
> in the typical sense. There were very few DirectByteBuffers with 
> capacity 66560.
>  * Enabling BucketCache triggers use of ByteBuffAllocator, but I don't think 
> it's related to the actual problem. First of all, compactions skip the cache 
> so that would be odd. Secondly, I disabled hbase.block.data.cacheonread and 
> all other CacheConfig, so the BucketCache is going unused (0 bytes used) and 
> the problem still persists.
> Digging deeper, I instrumented ByteBuffAllocator in two ways:
>  # I added some trace logging in getBuffer() and putbackBuffer() so I could 
> see the number of allocations and returns, the pool size at the time, and a 
> stacktrace.
>  # I added netty's ResourceLeakDetector to the RefCnt class we use in 
> ByteBuffAllocator, calling track() on creation of a RefCnt, record() in 
> RefCnt.retain(), and close() in RefCnt.deallocate().
> The ResourceLeakDetector immediately picked up leaks on start of the 
> regionserver. I saw leaks coming from both user requests and, as expected, 
> the Compactor. 
> I collected 21 unique LEAK stacktraces, which is too many to list. But all of 
> them had 2 
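
Below is a minimal sketch of the second instrumentation step described above (track() on creation, record() in retain(), close() in deallocate()). It uses Netty's unshaded ResourceLeakDetector API; the class name TrackedRefCnt and the Runnable recycler are illustrative stand-ins for HBase's RefCnt and its Recycler, not the actual patch.

{code:java}
import io.netty.util.AbstractReferenceCounted;
import io.netty.util.ReferenceCounted;
import io.netty.util.ResourceLeakDetector;
import io.netty.util.ResourceLeakDetectorFactory;
import io.netty.util.ResourceLeakTracker;

/** Illustrative stand-in for HBase's RefCnt, instrumented for leak detection. */
public class TrackedRefCnt extends AbstractReferenceCounted {

  private static final ResourceLeakDetector<TrackedRefCnt> DETECTOR =
    ResourceLeakDetectorFactory.instance().newResourceLeakDetector(TrackedRefCnt.class);

  private final Runnable recycler; // returns the backing buffer to the pool
  private final ResourceLeakTracker<TrackedRefCnt> leak;

  public TrackedRefCnt(Runnable recycler) {
    this.recycler = recycler;
    this.leak = DETECTOR.track(this); // may be null unless this object is sampled
  }

  @Override
  public ReferenceCounted retain() {
    if (leak != null) {
      leak.record(); // remember where the refCnt was bumped
    }
    return super.retain();
  }

  @Override
  public ReferenceCounted touch(Object hint) {
    if (leak != null) {
      leak.record(hint);
    }
    return this;
  }

  @Override
  protected void deallocate() {
    if (leak != null) {
      leak.close(this); // released correctly, stop tracking
    }
    recycler.run(); // give the buffer back to the allocator
  }
}
{code}

If such an object is garbage collected without deallocate() ever running, the detector prints a LEAK report with the recorded access points, which is where LEAK stacktraces like the 21 mentioned above come from.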

[GitHub] [hbase] Apache9 commented on a diff in pull request #4556: HBASE-26913 Replication Observability Framework

2022-07-05 Thread GitBox


Apache9 commented on code in PR #4556:
URL: https://github.com/apache/hbase/pull/4556#discussion_r914434308


##
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationMarkerChore.java:
##
@@ -0,0 +1,118 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.replication.regionserver;
+
+import static org.apache.hadoop.hbase.replication.master.ReplicationSinkTrackerTableCreator.REPLICATION_SINK_TRACKER_TABLE_NAME;
+
+import java.io.IOException;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.ScheduledChore;
+import org.apache.hadoop.hbase.Stoppable;
+import org.apache.hadoop.hbase.client.RegionInfo;
+import org.apache.hadoop.hbase.client.RegionInfoBuilder;
+import org.apache.hadoop.hbase.regionserver.MultiVersionConcurrencyControl;
+import org.apache.hadoop.hbase.regionserver.RegionServerServices;
+import org.apache.hadoop.hbase.regionserver.wal.WALUtil;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
+import org.apache.hadoop.hbase.wal.WAL;
+import org.apache.hadoop.hbase.wal.WALEdit;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * This chore is responsible for creating replication marker rows with a special WALEdit with
+ * family as {@link org.apache.hadoop.hbase.wal.WALEdit#METAFAMILY}, column qualifier as
+ * {@link WALEdit#REPLICATION_MARKER} and an empty value. If the config key
+ * {@link #REPLICATION_MARKER_ENABLED_KEY} is set to true, then we will create 1 marker row every
+ * {@link #REPLICATION_MARKER_CHORE_DURATION_KEY} ms.
+ * {@link org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader} will
+ * populate the replication marker edit with region_server_name, wal_name and wal_offset encoded
+ * in a {@link org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.ReplicationMarkerDescriptor}
+ * object. {@link org.apache.hadoop.hbase.replication.regionserver.Replication} will change the
+ * REPLICATION_SCOPE for this edit to GLOBAL so that it can replicate. On the sink cluster,
+ * {@link org.apache.hadoop.hbase.replication.regionserver.ReplicationSink} will convert the
+ * ReplicationMarkerDescriptor into a Put mutation to the REPLICATION_SINK_TRACKER_TABLE_NAME_STR
+ * table.
+ */
+@InterfaceAudience.Private
+public class ReplicationMarkerChore extends ScheduledChore {
+  private static final Logger LOG = LoggerFactory.getLogger(ReplicationMarkerChore.class);
+  private static final MultiVersionConcurrencyControl MVCC = new MultiVersionConcurrencyControl();
+  public static final RegionInfo REGION_INFO =
+    RegionInfoBuilder.newBuilder(REPLICATION_SINK_TRACKER_TABLE_NAME).build();
+  private static final String DELIMITER = "_";
+  private final Configuration conf;
+  private final RegionServerServices rsServices;
+  private WAL wal;
+
+  public static final String REPLICATION_MARKER_ENABLED_KEY =
+    "hbase.regionserver.replication.marker.enabled";
+  public static final boolean REPLICATION_MARKER_ENABLED_DEFAULT = false;
+
+  public static final String REPLICATION_MARKER_CHORE_DURATION_KEY =
+    "hbase.regionserver.replication.marker.chore.duration";
+  public static final int REPLICATION_MARKER_CHORE_DURATION_DEFAULT = 30 * 1000; // 30 seconds
+
+  public ReplicationMarkerChore(final Stoppable stopper, final RegionServerServices rsServices,
+    int period, Configuration conf) {
+    super("ReplicationTrackerChore", stopper, period);
+    this.conf = conf;
+    this.rsServices = rsServices;
+  }
+
+  @Override
+  protected void chore() {
+    if (wal == null) {
+      try {
+        wal = rsServices.getWAL(null);
+      } catch (IOException ioe) {
+        LOG.warn("Unable to get WAL ", ioe);
+        // Shouldn't happen. Ignore and wait for the next chore run.
+        return;
+      }
+    }
+    String serverName = rsServices.getServerName().getServerName();
+    long timeStamp = EnvironmentEdgeManager.currentTime();
+    // We only have timestamp in ReplicationMarkerDescriptor and the remaining properties walname,
+    // regionserver name and wal offset at 
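
As a usage sketch of the constants defined above, the fragment below shows how a region server component might read the two config keys and schedule the chore. The method, and where it would live, are assumptions for illustration; they are not the wiring added by this PR.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.ChoreService;
import org.apache.hadoop.hbase.Stoppable;
import org.apache.hadoop.hbase.regionserver.RegionServerServices;
import org.apache.hadoop.hbase.replication.regionserver.ReplicationMarkerChore;

// Illustrative wiring only.
void maybeScheduleReplicationMarkerChore(ChoreService choreService, Stoppable stopper,
    RegionServerServices rsServices) {
  Configuration conf = rsServices.getConfiguration();
  if (!conf.getBoolean(ReplicationMarkerChore.REPLICATION_MARKER_ENABLED_KEY,
    ReplicationMarkerChore.REPLICATION_MARKER_ENABLED_DEFAULT)) {
    return; // the marker chore is off by default
  }
  int period = conf.getInt(ReplicationMarkerChore.REPLICATION_MARKER_CHORE_DURATION_KEY,
    ReplicationMarkerChore.REPLICATION_MARKER_CHORE_DURATION_DEFAULT);
  // One marker row is written every `period` milliseconds.
  choreService.scheduleChore(new ReplicationMarkerChore(stopper, rsServices, period, conf));
}
```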

[GitHub] [hbase] comnetwork commented on a diff in pull request #4595: HBASE-26950 Use AsyncConnection in ReplicationSink

2022-07-05 Thread GitBox


comnetwork commented on code in PR #4595:
URL: https://github.com/apache/hbase/pull/4595#discussion_r914434283


##
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionUtils.java:
##
@@ -619,4 +615,13 @@ static void updateStats(Optional optStats,
 metrics -> ResultStatsUtil.updateStats(metrics, serverName, regionName, regionLoadStats));
 });
   }
+
+  public static AsyncConnection createAsyncConnection(Configuration conf) throws IOException {
+    return FutureUtils.get(ConnectionFactory.createAsyncConnection(conf));
+  }

Review Comment:
   @bbeaudreault, thank you for the review. OK, I think we could just use 
`ConnectionFactory.createAsyncConnection(conf).get()` in ReplicationSink, 
because it seems no other code could benefit from this. I have updated 
the code.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] comnetwork commented on a diff in pull request #4595: HBASE-26950 Use AsyncConnection in ReplicationSink

2022-07-05 Thread GitBox


comnetwork commented on code in PR #4595:
URL: https://github.com/apache/hbase/pull/4595#discussion_r91440


##
hbase-server/src/test/java/org/apache/hadoop/hbase/client/DummyAsyncTable.java:
##
@@ -0,0 +1,177 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import java.util.List;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.TimeUnit;
+import java.util.function.Function;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.filter.Filter;
+
+/**
+ * Can be overridden in UT if you only want to implement part of the methods in {@link AsyncTable}.
+ */
+public class DummyAsyncTable implements AsyncTable {

Review Comment:
   This `DummyAsyncTable` is used for 
`TestWALEntrySinkFilter.DevNullAsyncConnection.getTable` and is backported from 
master.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (HBASE-27148) Move minimum hadoop 3 support version to 3.2.3

2022-07-05 Thread Duo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17562951#comment-17562951
 ] 

Duo Zhang commented on HBASE-27148:
---

Will open another PR for branch-2.x, since we still need to support hadoop 2.x 
there.

> Move minimum hadoop 3 support version to 3.2.3
> ---
>
> Key: HBASE-27148
> URL: https://issues.apache.org/jira/browse/HBASE-27148
> Project: HBase
>  Issue Type: Task
>  Components: dependencies, hadoop3, security
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 2.5.0, 3.0.0-alpha-4
>
>
> It seems the hadoop community will not make a newer 3.1.x release, so let's move 
> the minimum hadoop 3 version to 3.2.3, due to security issues.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-27157) Potential race condition in WorkerAssigner

2022-07-05 Thread Duo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17562950#comment-17562950
 ] 

Duo Zhang commented on HBASE-27157:
---

Pushed to master and branch-2.

Mind posting a PR for branch-2.5 and branch-2.4?

The code is different, so the PR for master cannot be applied to branch-2.5 and 
branch-2.4 cleanly.

Thanks.

> Potential race condition in WorkerAssigner
> --
>
> Key: HBASE-27157
> URL: https://issues.apache.org/jira/browse/HBASE-27157
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2
>Affects Versions: 2.4.12
>Reporter: ruanhui
>Assignee: ruanhui
>Priority: Minor
> Fix For: 3.0.0-alpha-4
>
>
> Multiple SplitWALProcedures share the same WorkerAssigner instance, so there 
> is a potential race condition because the suspend and wake methods are not 
> synchronized.
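
For illustration only (a simplified stand-in, not HBase's actual WorkerAssigner): the sketch below shows why the suspend and wake paths must synchronize on the same lock; otherwise a wake() arriving between the availability check and the suspend bookkeeping can be lost, and the suspended procedure is never rescheduled.

{code:java}
import java.util.ArrayDeque;
import java.util.Queue;

class SimplifiedWorkerAssigner {
  private int availableWorkers;
  private final Queue<Runnable> suspended = new ArrayDeque<>();

  /** Try to take a worker; if none are free, register a callback and report "suspended". */
  synchronized boolean acquireOrSuspend(Runnable wakeCallback) {
    if (availableWorkers > 0) {
      availableWorkers--;
      return true;
    }
    // Must happen under the same lock as wake(); otherwise a concurrent wake() can be missed.
    suspended.add(wakeCallback);
    return false;
  }

  /** Return a worker and wake one suspended procedure, if any. */
  synchronized void wake() {
    availableWorkers++;
    Runnable callback = suspended.poll();
    if (callback != null) {
      callback.run();
    }
  }
}
{code}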



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [hbase] Apache9 merged pull request #4561: HBASE-27148 Move minimum hadoop 3 support version to 3.2.3

2022-07-05 Thread GitBox


Apache9 merged PR #4561:
URL: https://github.com/apache/hbase/pull/4561


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (HBASE-27171) Fix Annotation Error in HRegionFileSystem

2022-07-05 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang resolved HBASE-27171.
---
Fix Version/s: 2.5.0
   3.0.0-alpha-4
   2.4.14
 Hadoop Flags: Reviewed
   Resolution: Fixed

Pushed to branch-2.4+.

Thanks [~tangtianhang] for contributing!

> Fix Annotation Error in HRegionFileSystem
> -
>
> Key: HBASE-27171
> URL: https://issues.apache.org/jira/browse/HBASE-27171
> Project: HBase
>  Issue Type: Bug
>Reporter: tianhang tang
>Assignee: tianhang tang
>Priority: Trivial
> Fix For: 2.5.0, 3.0.0-alpha-4, 2.4.14
>
>
> The annotation of setStoragePolicy:
> {code:java}
> See see hadoop 2.6+ ... {code}
> should be 
> {code:java}
> See hadoop 2.6+ {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (HBASE-23330) Expose cluster ID for clients using it for delegation token based auth

2022-07-05 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang resolved HBASE-23330.
---
Fix Version/s: 2.5.0
   2.4.14
   (was: 2.3.0)
 Hadoop Flags: Reviewed
   Resolution: Fixed

>   Expose cluster ID for clients using it for delegation token based auth
> 
>
> Key: HBASE-23330
> URL: https://issues.apache.org/jira/browse/HBASE-23330
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, master
>Affects Versions: 3.0.0-alpha-1
>Reporter: Bharath Vissapragada
>Assignee: Bharath Vissapragada
>Priority: Major
> Fix For: 2.5.0, 2.4.14, 1.7.0, 3.0.0-alpha-1
>
>
> As Gary Helmling noted in HBASE-18095, some clients use the cluster ID for 
> delegation-based auth. 
> {quote}
> There is an additional complication here for token-based authentication. When 
> a delegation token is used for SASL authentication, the client uses the 
> cluster ID obtained from Zookeeper to select the token identifier to use. So 
> there would also need to be some Zookeeper-less, unauthenticated way to 
> obtain the cluster ID as well.
> {quote}
> Once we move ZK out of the picture, the cluster ID sits behind an endpoint that 
> needs to be authenticated. Figure out a way to expose this to clients.
> One suggestion in the comments (from Andrew)
> {quote}
>  Cluster ID lookup is most easily accomplished with a new servlet on the 
> HTTP(S) endpoint on the masters, serving the cluster ID as plain text. It 
> can't share the RPC server endpoint when SASL is enabled because any 
> interaction with that endpoint must be authenticated. This is ugly but 
> alternatives seem worse. One alternative would be a second RPC port for APIs 
> that do not / cannot require prior authentication.
> {quote}
> There could be implications if SPNEGO is enabled on these HTTP(S) endpoints. 
> We need to make sure that this is handled.
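
A minimal sketch of the servlet suggestion quoted above, assuming the standard javax.servlet API; the class name and the way the cluster ID is supplied are illustrative and do not describe the fix that was actually committed.

{code:java}
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/** Hypothetical servlet serving the cluster ID as plain text, without authentication. */
public class ClusterIdServlet extends HttpServlet {

  private final String clusterId; // assumption: injected by whatever wires up the master web UI

  public ClusterIdServlet(String clusterId) {
    this.clusterId = clusterId;
  }

  @Override
  protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
    // Intentionally unauthenticated: clients that cannot yet authenticate can still
    // discover the cluster ID and select the matching delegation token.
    resp.setContentType("text/plain");
    resp.getWriter().write(clusterId);
  }
}
{code}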



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [hbase] Apache9 merged pull request #4577: HBASE-27157 Potential race condition in WorkerAssigner

2022-07-05 Thread GitBox


Apache9 merged PR #4577:
URL: https://github.com/apache/hbase/pull/4577


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache9 merged pull request #4588: HBASE-27171 Fix Annotation Error in HRegionFileSystem

2022-07-05 Thread GitBox


Apache9 merged PR #4588:
URL: https://github.com/apache/hbase/pull/4588


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache9 commented on pull request #4597: HBASE-26708 Fix multiple possible buffer leaks

2022-07-05 Thread GitBox


Apache9 commented on PR #4597:
URL: https://github.com/apache/hbase/pull/4597#issuecomment-1175708383

   Thanks @normanmaurer . On the wrap handler, we decided to just extend 
MessageToByteEncoder so we do not need to handle the release on our own. The 
NettyRpcServer-related changes will be included in #4596 .
   
   For the other parts, let's land them under the new issue, HBASE-27180.
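
For context, a hedged sketch of the MessageToByteEncoder approach mentioned above: the base class releases the inbound message after encode() returns, so the handler needs no release bookkeeping of its own. The class name, message type and SaslClient field are illustrative, and the unshaded io.netty packages are used here instead of HBase's shaded thirdparty ones.

```java
import javax.security.sasl.SaslClient;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.MessageToByteEncoder;

/** Illustrative SASL wrap encoder; not the exact class from the PR. */
public class SaslWrapEncoder extends MessageToByteEncoder<ByteBuf> {

  private final SaslClient saslClient;

  public SaslWrapEncoder(SaslClient saslClient) {
    this.saslClient = saslClient;
  }

  @Override
  protected void encode(ChannelHandlerContext ctx, ByteBuf msg, ByteBuf out) throws Exception {
    byte[] bytes = new byte[msg.readableBytes()];
    msg.readBytes(bytes);
    byte[] wrapped = saslClient.wrap(bytes, 0, bytes.length);
    // Length-prefixed SASL-wrapped payload.
    out.writeInt(wrapped.length);
    out.writeBytes(wrapped);
    // msg is released by MessageToByteEncoder once encode() returns, so no manual release here.
  }
}
```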


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (HBASE-27153) Improvements to read-path tracing

2022-07-05 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17562923#comment-17562923
 ] 

Andrew Kyle Purtell commented on HBASE-27153:
-

We can indeed.

> Improvements to read-path tracing
> -
>
> Key: HBASE-27153
> URL: https://issues.apache.org/jira/browse/HBASE-27153
> Project: HBase
>  Issue Type: Improvement
>  Components: Operability, regionserver
>Affects Versions: 2.5.0, 3.0.0-alpha-2, 2.6.0
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Major
> Fix For: 2.5.0, 3.0.0-alpha-4
>
>
> Take another pass through tracing of the read path and make adjustments 
> accordingly. One of the major concerns raised previously is that we create a 
> span for every block access. Start by simplifying this to trace events and 
> see what else comes up.
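
As a hedged sketch of "simplifying this to trace events": instead of opening a child span per block access, record a lightweight event on the already-active span. The event name and attribute below are illustrative, not the names actually adopted.

{code:java}
import io.opentelemetry.api.common.AttributeKey;
import io.opentelemetry.api.common.Attributes;
import io.opentelemetry.api.trace.Span;

final class BlockReadTraceEvents {
  private static final AttributeKey<Long> BLOCK_BYTES = AttributeKey.longKey("block.bytes");

  /** Record a lightweight event on the caller's current span instead of creating a child span. */
  static void recordBlockRead(long bytesRead) {
    Span span = Span.current();
    if (span.isRecording()) { // cheap no-op when the request is not being traced
      span.addEvent("block.read", Attributes.of(BLOCK_BYTES, bytesRead));
    }
  }

  private BlockReadTraceEvents() {
  }
}
{code}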



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-27155) Improvements to low level scanner tracing

2022-07-05 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17562920#comment-17562920
 ] 

Andrew Kyle Purtell commented on HBASE-27155:
-

Hey [~ndimiduk] thank you so much for taking a look at this. 

> Improvements to low level scanner tracing
> -
>
> Key: HBASE-27155
> URL: https://issues.apache.org/jira/browse/HBASE-27155
> Project: HBase
>  Issue Type: Improvement
>  Components: Scanners, tracing
>Reporter: Andrew Kyle Purtell
>Priority: Major
>
> Related to HBASE-27153, consider tracer semantic attributes for low level 
> scanner details.
> Consider 
> https://issues.apache.org/jira/secure/attachment/13006571/W-7665966-Instrument-low-level-scan-details-branch-2.2.patch
>  (from HBASE-24637).  This was used to collect detailed metrics of the 
> decisions of ScanQueryMatcher and related classes.
> {noformat}
> metrics: [ "block_read_keys": 477 "block_read_ns": 3427040
>  "block_reads": 13 "block_seek_ns": 1606370 "block_seeks": 169
>  "block_unpack_ns": 10256 "block_unpacks": 13 "cells_matched": 165
>  "cells_matched__hbase:meta,,1.1588230740__info": 165
>  "column_hint_include": 148 "memstore_next": 72
>  "memstore_next_ns": 136671 "memstore_seek": 2
>  "memstore_seek_ns": 631629 "reseeks": 36 "sqm_hint_done": 17
>  "sqm_hint_include": 74 "sqm_hint_seek_next_col": 74
>  "store_next": 276
>  "store_next__1c930a35ff8041368a05817adbdcce97": 40
>  "store_next__2644194fdf794815abdc940c183dab88": 40
>  "store_next__32ce31753fb244668f788fb94ab02dff": 40
>  "store_next__61c8423b9d8846c99a61cd2996b5b621": 116
>  "store_next__f4f7878c9fcf40d9902416d5c7a4097a": 40
>  "store_next_ns": 1891634
>  "store_next_ns__1c930a35ff8041368a05817adbdcce97": 269383
>  "store_next_ns__2644194fdf794815abdc940c183dab88": 299936
>  "store_next_ns__32ce31753fb244668f788fb94ab02dff": 288594
>  "store_next_ns__61c8423b9d8846c99a61cd2996b5b621": 594313
>  "store_next_ns__f4f7878c9fcf40d9902416d5c7a4097a": 439408
>  "store_reseek": 164
>  "store_reseek__1c930a35ff8041368a05817adbdcce97": 32
>  "store_reseek__2644194fdf794815abdc940c183dab88": 32
>  "store_reseek__32ce31753fb244668f788fb94ab02dff": 32
>  "store_reseek__61c8423b9d8846c99a61cd2996b5b621": 36
>  "store_reseek__f4f7878c9fcf40d9902416d5c7a4097a": 32
>  "store_reseek_ns": 2969978
>  "store_reseek_ns__1c930a35ff8041368a05817adbdcce97": 359489
>  "store_reseek_ns__2644194fdf794815abdc940c183dab88": 595115
>  "store_reseek_ns__32ce31753fb244668f788fb94ab02dff": 474642
>  "store_reseek_ns__61c8423b9d8846c99a61cd2996b5b621": 1013188
>  "store_reseek_ns__f4f7878c9fcf40d9902416d5c7a4097a": 527544
>  "store_seek": 5
>  "store_seek__1c930a35ff8041368a05817adbdcce97": 1
>  "store_seek__2644194fdf794815abdc940c183dab88": 1
>  "store_seek__32ce31753fb244668f788fb94ab02dff": 1
>  "store_seek__61c8423b9d8846c99a61cd2996b5b621": 1
>  "store_seek__f4f7878c9fcf40d9902416d5c7a4097a": 1
>  "store_seek_ns": 8862786
>  "store_seek_ns__1c930a35ff8041368a05817adbdcce97": 830421
>  "store_seek_ns__2644194fdf794815abdc940c183dab88": 585899
>  "store_seek_ns__32ce31753fb244668f788fb94ab02dff": 483605
>  "store_seek_ns__61c8423b9d8846c99a61cd2996b5b621": 5958072
>  "store_seek_ns__f4f7878c9fcf40d9902416d5c7a4097a": 1004789
>  "versions_hint_include": 74 "versions_hint_seek_next_col": 74 ]
> {noformat}
> We can see the differences between seek time and reseek time, and we get the 
> counts for each, so we can analyze whether SQM is making optimal (or less 
> optimal) choices, or whether behavior has changed; and we can identify 
> particular store file(s) that might be outliers for some reason when hunting 
> for sources of regression. We get the time required to unpack blocks (on 
> average). We get a count of hints supplied by base SQM functionality or 
> filters. We get the relative contributions to query processing time 
> separately from memstore and store files. 
> Perhaps this can be done conditionally for scans that are selected for 
> tracing. Of course there is a performance concern, so it must be done such 
> that the overheads really are conditional on whether the path is being actively 
> traced, and measured carefully to decide if it should be committed or not. 
> WDYT [~ndimiduk] [~zhangduo]
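
One possible shape for the "conditional on active tracing" idea, using the OpenTelemetry API already used by HBase; the attribute names are illustrative, not proposed semantic conventions.

{code:java}
import io.opentelemetry.api.trace.Span;

final class ScanDetailTracing {
  /** Attach low-level scanner counters to a span only when it is actually being recorded. */
  static void addScanDetail(Span span, long storeSeeks, long storeSeekNanos, long reseeks,
      long reseekNanos) {
    if (!span.isRecording()) {
      return; // keeps the overhead conditional on the path being actively traced
    }
    span.setAttribute("scan.store_seek", storeSeeks);
    span.setAttribute("scan.store_seek_ns", storeSeekNanos);
    span.setAttribute("scan.store_reseek", reseeks);
    span.setAttribute("scan.store_reseek_ns", reseekNanos);
  }

  private ScanDetailTracing() {
  }
}
{code}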



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [hbase] Apache-HBase commented on pull request #4597: HBASE-26708 Fix multiple possible buffer leaks

2022-07-05 Thread GitBox


Apache-HBase commented on PR #4597:
URL: https://github.com/apache/hbase/pull/4597#issuecomment-1175681019

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 31s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 13s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m  2s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 47s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 23s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 13s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 11s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 55s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 42s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 42s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m  4s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 16s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m  5s |  hbase-common in the patch passed.  
|
   | +1 :green_heart: |  unit  |   1m 49s |  hbase-client in the patch passed.  
|
   | +1 :green_heart: |  unit  |   1m 44s |  hbase-asyncfs in the patch passed. 
 |
   | +1 :green_heart: |  unit  | 230m 11s |  hbase-server in the patch passed.  
|
   |  |   | 260m  4s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4597/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4597 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 2eec32521615 5.4.0-90-generic #101-Ubuntu SMP Fri Oct 15 
20:00:55 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / f3f292fad4 |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4597/2/testReport/
 |
   | Max. process+thread count | 2376 (vs. ulimit of 3) |
   | modules | C: hbase-common hbase-client hbase-asyncfs hbase-server U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4597/2/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #4597: HBASE-26708 Fix multiple possible buffer leaks

2022-07-05 Thread GitBox


Apache-HBase commented on PR #4597:
URL: https://github.com/apache/hbase/pull/4597#issuecomment-1175654479

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 38s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  2s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 17s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   2m 19s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 14s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   3m 55s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 11s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  3s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 13s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 13s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   3m 52s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 25s |  hbase-common in the patch passed.  
|
   | +1 :green_heart: |  unit  |   1m  6s |  hbase-client in the patch passed.  
|
   | +1 :green_heart: |  unit  |   0m 56s |  hbase-asyncfs in the patch passed. 
 |
   | +1 :green_heart: |  unit  | 201m 35s |  hbase-server in the patch passed.  
|
   |  |   | 225m  1s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4597/2/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4597 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 5beacbd5f166 5.4.0-1071-aws #76~18.04.1-Ubuntu SMP Mon Mar 
28 17:49:57 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / f3f292fad4 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4597/2/testReport/
 |
   | Max. process+thread count | 2659 (vs. ulimit of 3) |
   | modules | C: hbase-common hbase-client hbase-asyncfs hbase-server U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4597/2/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] virajjasani commented on pull request #4596: HBASE-26708 Simplity wrap implementation

2022-07-05 Thread GitBox


virajjasani commented on PR #4596:
URL: https://github.com/apache/hbase/pull/4596#issuecomment-1175623551

   > Especially, the above failures are in the compaction thread pool, not in 
the rpc handler threads, which are not affected by the changes here too.
   
   I wanted to get confirmation of the same. Sounds good, thanks.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache9 commented on pull request #4596: HBASE-26708 Simplity wrap implementation

2022-07-05 Thread GitBox


Apache9 commented on PR #4596:
URL: https://github.com/apache/hbase/pull/4596#issuecomment-1175623535

   Tried the 3 failed UTs locally
   
   TestClusterRestartFailoverSplitWithoutZk
   TestZKSecretWatcher
   TestFromClientSideWithCoprocessor5
   
   All passed.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache9 merged pull request #4598: HBASE-23330: Fix delegation token fetch with MasterRegistry (#1084)

2022-07-05 Thread GitBox


Apache9 merged PR #4598:
URL: https://github.com/apache/hbase/pull/4598


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache9 commented on pull request #4598: HBASE-23330: Fix delegation token fetch with MasterRegistry (#1084)

2022-07-05 Thread GitBox


Apache9 commented on PR #4598:
URL: https://github.com/apache/hbase/pull/4598#issuecomment-1175619189

   OK, good, all green. It is just a cherry-pick, let me merge...


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache9 commented on pull request #4596: HBASE-26708 Simplity wrap implementation

2022-07-05 Thread GitBox


Apache9 commented on PR #4596:
URL: https://github.com/apache/hbase/pull/4596#issuecomment-1175618564

   > I think we have some other issues with the server side changes, I see 
multiple compaction failures:
   > 
   > ```
   > 2022-07-05 19:00:06,828 WARN  [20-longCompactions-0] hdfs.DFSClient - 
Failed to connect to dn1/10.118.172.92:50010 for block 
BP-1698803826-10.118.165.103-1656451448670:blk_1074099523_359025, add to 
deadNodes and continue. 
   > java.net.ConnectException: Connection refused
   >at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
   >at 
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
   >at 
org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
   >at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:539)
   >at 
org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:2913)
   >at 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:851)
   >at 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:753)
   >at 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:387)
   >at 
org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:843)
   >at 
org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:772)
   >at 
org.apache.hadoop.hdfs.DFSInputStream.seekToBlockSource(DFSInputStream.java:1825)
   >at 
org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:1035)
   >at 
org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:1074)
   >at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1138)
   >at 
org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:148)
   >at 
org.apache.hadoop.hbase.io.util.BlockIOUtils.readWithExtra(BlockIOUtils.java:180)
   >at 
org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readAtOffset(HFileBlock.java:1450)
   >at 
org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1681)
   >at 
org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1492)
   >at 
org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1308)
   >at 
org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.readNextDataBlock(HFileReaderImpl.java:739)
   >at 
org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$EncodedScanner.next(HFileReaderImpl.java:1480)
   >at 
org.apache.hadoop.hbase.io.HalfStoreFileReader$1.next(HalfStoreFileReader.java:140)
   >at 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:194)
   >at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:112)
   >at 
org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:677)
   >at 
org.apache.hadoop.hbase.regionserver.compactions.Compactor.performCompaction(Compactor.java:417)
   >at 
org.apache.hadoop.hbase.regionserver.compactions.Compactor.compact(Compactor.java:345)
   >at 
org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:65)
   >at 
org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:122)
   >at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1464)
   >at 
org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:2286)
   >at 
org.apache.hadoop.hbase.regionserver.CompactSplit$CompactionRunner.doCompaction(CompactSplit.java:618)
   >at 
org.apache.hadoop.hbase.regionserver.CompactSplit$CompactionRunner.run(CompactSplit.java:666)
   >at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
   >at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
   >at java.lang.Thread.run(Thread.java:750)
   > ```
   > 
   > ```
   > 2022-07-05 18:59:43,010 WARN  [20-longCompactions-0] 
impl.BlockReaderFactory - I/O error constructing remote block reader.
   > java.net.ConnectException: Connection refused
   >at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
   >at 
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
   >at 
org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
   >at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:539)
   >at 
org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:2913)
   >at 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:851)
   >at 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:753)
   >at 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:387)
   >at 

[GitHub] [hbase] Apache9 commented on pull request #4596: HBASE-26708 Simplity wrap implementation

2022-07-05 Thread GitBox


Apache9 commented on PR #4596:
URL: https://github.com/apache/hbase/pull/4596#issuecomment-1175615933

   The failures for JDK8 and JDK11 are different, so they are usually not 
related, just flakies. I will also try them locally.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] virajjasani commented on pull request #4596: HBASE-26708 Simplity wrap implementation

2022-07-05 Thread GitBox


virajjasani commented on PR #4596:
URL: https://github.com/apache/hbase/pull/4596#issuecomment-1175610716

   I am going to retest the changes on the cluster; maybe some other issues 
prevented the actual testing. In the meantime, @Apache9 could you please verify 
that the UT failures are not relevant?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] virajjasani commented on pull request #4597: HBASE-26708 Fix multiple possible buffer leaks

2022-07-05 Thread GitBox


virajjasani commented on PR #4597:
URL: https://github.com/apache/hbase/pull/4597#issuecomment-1175563809

   Unfortunately this is also not working. I see RPC processing errors:
   ```
   2022-07-05 22:17:32,603 ERROR [,queue=18,port=6] ipc.RpcServer - 
Unexpected throwable object 
   java.lang.NullPointerException
at 
org.apache.hadoop.hbase.shaded.protobuf.generated.ClusterStatusProtos$ServerTask$Builder.setStatus(ClusterStatusProtos.java:14323)
at 
org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toServerTask(ProtobufUtil.java:3558)
at 
org.apache.hadoop.hbase.ClusterMetricsBuilder.lambda$toClusterStatus$4(ClusterMetricsBuilder.java:78)
at 
java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at 
java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1384)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
at 
java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
at 
java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at 
java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:566)
at 
org.apache.hadoop.hbase.ClusterMetricsBuilder.toClusterStatus(ClusterMetricsBuilder.java:78)
at 
org.apache.hadoop.hbase.master.MasterRpcServices.getClusterStatus(MasterRpcServices.java:986)
at 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:384)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:131)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:371)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:351)
   ```
   
   And yet I still see leaks:
   ```
   2022-07-05 22:23:53,649 ERROR [S-EventLoopGroup-1-4] 
util.ResourceLeakDetector - 
ByteToMessageDecoder.expandCumulation(ByteToMessageDecoder.java:541)

org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder$1.cumulate(ByteToMessageDecoder.java:97)

org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:277)

org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)

org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)

org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)

org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)

org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)

org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)

org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)

org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:800)

org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)

org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)

org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)

org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)

org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
java.lang.Thread.run(Thread.java:750)
   : 23 leak records were discarded because they were duplicates
   ```
   
   ```
   2022-07-05 22:23:51,650 ERROR [-EventLoopGroup-1-11] 
util.ResourceLeakDetector - oll.EpollEventLoop.run(EpollEventLoop.java:378)

org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)

org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)

org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
java.lang.Thread.run(Thread.java:750)
   Created at:

org.apache.hbase.thirdparty.io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:401)


[GitHub] [hbase] Apache-HBase commented on pull request #4589: HBASE-27172 Upgrade OpenTelemetry dependency to 1.15.0

2022-07-05 Thread GitBox


Apache-HBase commented on PR #4589:
URL: https://github.com/apache/hbase/pull/4589#issuecomment-1175556755

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 40s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  2s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 19s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   2m 13s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 24s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   3m 51s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 26s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 15s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  6s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 25s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 25s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   3m 50s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   1m 18s |  root generated 2 new + 81 unchanged 
- 3 fixed = 83 total (was 84)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 395m 35s |  root in the patch failed.  |
   |  |   | 416m 36s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4589/2/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4589 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux fc2300d79e53 5.4.0-1071-aws #76~18.04.1-Ubuntu SMP Mon Mar 
28 17:49:57 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / f3f292fad4 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | javadoc | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4589/2/artifact/yetus-jdk8-hadoop3-check/output/diff-javadoc-javadoc-root.txt
 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4589/2/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-root.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4589/2/testReport/
 |
   | Max. process+thread count | 2776 (vs. ulimit of 3) |
   | modules | C: hbase-assembly . U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4589/2/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #4597: HBASE-26708 Fix multiple possible buffer leaks

2022-07-05 Thread GitBox


Apache-HBase commented on PR #4597:
URL: https://github.com/apache/hbase/pull/4597#issuecomment-1175541926

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 43s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 13s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   2m 22s |  master passed  |
   | +1 :green_heart: |  compile  |   3m 37s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   0m 56s |  master passed  |
   | +1 :green_heart: |  spotless  |   0m 42s |  branch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   2m 34s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 11s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  6s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 35s |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 35s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 55s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  11m 55s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.2 3.3.1.  |
   | -1 :x: |  spotless  |   0m 33s |  patch has 21 errors when running 
spotless:check, run spotless:apply to fix.  |
   | +1 :green_heart: |  spotbugs  |   2m 55s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 27s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  39m 31s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4597/2/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4597 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
spotless checkstyle compile |
   | uname | Linux a381845bfa2e 5.4.0-1071-aws #76~18.04.1-Ubuntu SMP Mon Mar 
28 17:49:57 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / f3f292fad4 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | spotless | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4597/2/artifact/yetus-general-check/output/patch-spotless.txt
 |
   | Max. process+thread count | 60 (vs. ulimit of 3) |
   | modules | C: hbase-common hbase-client hbase-asyncfs hbase-server U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4597/2/console 
|
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #4596: HBASE-26708 Simplity wrap implementation

2022-07-05 Thread GitBox


Apache-HBase commented on PR #4596:
URL: https://github.com/apache/hbase/pull/4596#issuecomment-1175540074

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  0s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 17s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m  7s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 29s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 42s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  2s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 11s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 12s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 14s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 14s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 57s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 52s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 48s |  hbase-client in the patch passed.  
|
   | -1 :x: |  unit  | 262m 16s |  hbase-server in the patch failed.  |
   |  |   | 289m 45s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4596/3/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4596 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 7537ffcb6eae 5.4.0-1071-aws #76~18.04.1-Ubuntu SMP Mon Mar 
28 17:49:57 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / f3f292fad4 |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4596/3/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4596/3/testReport/
 |
   | Max. process+thread count | 2411 (vs. ulimit of 3) |
   | modules | C: hbase-client hbase-server U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4596/3/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] virajjasani commented on pull request #4597: HBASE-26708 Fix multiple possible buffer leaks

2022-07-05 Thread GitBox


virajjasani commented on PR #4597:
URL: https://github.com/apache/hbase/pull/4597#issuecomment-1175510987

   > Let me remove touch references and try this patch.
   
   Oops, my bad. I didn't have the latest patch from @bbeaudreault 
(https://github.com/apache/hbase/commit/f36b8eadd8aed84029ca4843c6822e662fec147a)
 on the 2.4 version that I am testing the cluster with.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #4597: HBASE-26708 Fix multiple possible buffer leaks

2022-07-05 Thread GitBox


Apache-HBase commented on PR #4597:
URL: https://github.com/apache/hbase/pull/4597#issuecomment-1175507541

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 49s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  4s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 15s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 58s |  master passed  |
   | +1 :green_heart: |  compile  |   2m 49s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m 45s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 12s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 33s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 11s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 11s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m 29s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 43s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 32s |  hbase-common in the patch passed.  
|
   | +1 :green_heart: |  unit  |   2m 15s |  hbase-client in the patch passed.  
|
   | +1 :green_heart: |  unit  |   1m 44s |  hbase-asyncfs in the patch passed. 
 |
   | -1 :x: |  unit  | 274m  1s |  hbase-server in the patch failed.  |
   |  |   | 312m 14s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4597/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4597 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 8257ea7e058f 5.4.0-90-generic #101-Ubuntu SMP Fri Oct 15 
20:00:55 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / f3f292fad4 |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4597/1/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4597/1/testReport/
 |
   | Max. process+thread count | 2428 (vs. ulimit of 3) |
   | modules | C: hbase-common hbase-client hbase-asyncfs hbase-server U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4597/1/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #4591: Backport "HBASE-27172 Upgrade OpenTelemetry dependency to 1.15.0" to branch-2.5

2022-07-05 Thread GitBox


Apache-HBase commented on PR #4591:
URL: https://github.com/apache/hbase/pull/4591#issuecomment-1175504107

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  5s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  4s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2.5 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 20s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   2m 24s |  branch-2.5 passed  |
   | +1 :green_heart: |  compile  |   1m 30s |  branch-2.5 passed  |
   | +1 :green_heart: |  shadedjars  |   3m 37s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 36s |  branch-2.5 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 18s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  7s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 34s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 34s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   3m 35s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 333m 27s |  root in the patch passed.  |
   |  |   | 355m 25s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4591/2/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4591 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 5d44cb1258af 5.4.0-90-generic #101-Ubuntu SMP Fri Oct 15 
20:00:55 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2.5 / e3963458b1 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4591/2/testReport/
 |
   | Max. process+thread count | 3906 (vs. ulimit of 12500) |
   | modules | C: hbase-assembly . U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4591/2/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #4598: HBASE-23330: Fix delegation token fetch with MasterRegistry (#1084)

2022-07-05 Thread GitBox


Apache-HBase commented on PR #4598:
URL: https://github.com/apache/hbase/pull/4598#issuecomment-1175500871

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 12s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  6s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 13s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   2m 47s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   1m 55s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   3m 53s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 30s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 55s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 55s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   3m 50s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 54s |  hbase-client in the patch passed.  
|
   | +1 :green_heart: |  unit  | 195m 29s |  hbase-server in the patch passed.  
|
   | +1 :green_heart: |  unit  |  14m 23s |  hbase-mapreduce in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   7m 39s |  hbase-thrift in the patch passed.  
|
   |  |   | 244m 18s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4598/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4598 |
   | JIRA Issue | HBASE-23330 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 54136a101c1d 5.4.0-90-generic #101-Ubuntu SMP Fri Oct 15 
20:00:55 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / 57a3781160 |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4598/1/testReport/
 |
   | Max. process+thread count | 2711 (vs. ulimit of 12500) |
   | modules | C: hbase-client hbase-server hbase-mapreduce hbase-thrift U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4598/1/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] virajjasani commented on pull request #4597: HBASE-26708 Fix multiple possible buffer leaks

2022-07-05 Thread GitBox


virajjasani commented on PR #4597:
URL: https://github.com/apache/hbase/pull/4597#issuecomment-1175494523

   Let me remove touch references and try this patch.
   The reason these netty fixes are difficult to test is that they need 
to be deployed to a dedicated secure test cluster, and we don't even have remote 
debugging enabled on those clusters, so it takes time to test even small 
changes. But these leaks seem to be mostly relevant to the SASL case. Let me try 
reverting the touch() calls from this patch.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] virajjasani commented on pull request #4597: HBASE-26708 Fix multiple possible buffer leaks

2022-07-05 Thread GitBox


virajjasani commented on PR #4597:
URL: https://github.com/apache/hbase/pull/4597#issuecomment-1175493007

   Just finished testing. I think we have a problem here with 
SingleByteBuff#touch:
   
   ```
   2022-07-05 20:55:56,058 ERROR [0:becomeActiveMaster] master.HMaster - Failed 
to become active master
   java.io.IOException: java.io.IOException: 
org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem reading HFile 
Trailer from file 
hdfs://dev5b/hbase/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/0be4141cae78432c8e5b440cd07d27a2
at 
org.apache.hadoop.hbase.regionserver.HRegion.initializeStores(HRegion.java:1157)
at 
org.apache.hadoop.hbase.regionserver.HRegion.initializeStores(HRegion.java:1100)
at 
org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:995)
at 
org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:945)
at 
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7919)
at 
org.apache.hadoop.hbase.regionserver.HRegion.openHRegionFromTableDir(HRegion.java:7880)
at 
org.apache.hadoop.hbase.master.region.MasterRegion.open(MasterRegion.java:245)
at 
org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:323)
at 
org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104)
at 
org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:851)
at 
org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2184)
at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:525)
at java.lang.Thread.run(Thread.java:750)
   Caused by: java.io.IOException: 
org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem reading HFile 
Trailer from file 
hdfs://dev5b/hbase/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/0be4141cae78432c8e5b440cd07d27a2
at 
org.apache.hadoop.hbase.regionserver.HStore.openStoreFiles(HStore.java:557)
at 
org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:515)
at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:283)
at 
org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:6394)
at 
org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1123)
at 
org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1120)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
... 1 more
   Caused by: org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem 
reading HFile Trailer from file 
hdfs://dev5b/hbase/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/0be4141cae78432c8e5b440cd07d27a2
at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:501)
at 
org.apache.hadoop.hbase.regionserver.StoreFileReader.<init>(StoreFileReader.java:96)
at 
org.apache.hadoop.hbase.regionserver.StoreFileInfo.createReader(StoreFileInfo.java:292)
at 
org.apache.hadoop.hbase.regionserver.HStoreFile.open(HStoreFile.java:359)
at 
org.apache.hadoop.hbase.regionserver.HStoreFile.initReader(HStoreFile.java:477)
at 
org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:695)
at 
org.apache.hadoop.hbase.regionserver.HStore.lambda$openStoreFiles$1(HStore.java:535)
... 6 more
   Caused by: java.lang.UnsupportedOperationException
at org.apache.hadoop.hbase.nio.RefCnt.touch(RefCnt.java:61)
at 
org.apache.hbase.thirdparty.io.netty.util.AbstractReferenceCounted.touch(AbstractReferenceCounted.java:71)
at 
org.apache.hadoop.hbase.nio.SingleByteBuff.<init>(SingleByteBuff.java:68)
at 
org.apache.hadoop.hbase.nio.SingleByteBuff.<init>(SingleByteBuff.java:55)
at 
org.apache.hadoop.hbase.nio.SingleByteBuff.<init>(SingleByteBuff.java:51)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock$PrefetchedHeader.<init>(HFileBlock.java:1302)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock$PrefetchedHeader.<init>(HFileBlock.java:1299)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.<init>(HFileBlock.java:1330)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.<init>(HFileReaderImpl.java:141)
at 
org.apache.hadoop.hbase.io.hfile.HFilePreadReader.<init>(HFilePreadReader.java:36)
at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:493)
... 12 more
   ```
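   For context on the failure above: the `UnsupportedOperationException` is thrown from 
`RefCnt.touch(RefCnt.java:61)`, which netty's `AbstractReferenceCounted.touch()` invokes 
while the `SingleByteBuff` is being constructed. Purely as an illustrative sketch (not the 
actual HBase patch; the shaded netty imports and the injected `leak` tracker are 
assumptions), a ref-counted class can satisfy `touch(Object)` by recording on a leak 
tracker instead of throwing:
   
   ```java
   import org.apache.hbase.thirdparty.io.netty.util.AbstractReferenceCounted;
   import org.apache.hbase.thirdparty.io.netty.util.ReferenceCounted;
   import org.apache.hbase.thirdparty.io.netty.util.ResourceLeakTracker;
   
   // Hypothetical sketch only: a ref-counted object whose touch() records a hint on an
   // optional leak tracker instead of throwing UnsupportedOperationException.
   public class TouchFriendlyRefCnt extends AbstractReferenceCounted {
   
     // Assumed field: the tracker handed in by whoever created this object; may be null
     // when leak detection is disabled or this allocation was not sampled.
     private final ResourceLeakTracker<TouchFriendlyRefCnt> leak;
   
     public TouchFriendlyRefCnt(ResourceLeakTracker<TouchFriendlyRefCnt> leak) {
       this.leak = leak;
     }
   
     @Override
     public ReferenceCounted touch(Object hint) {
       if (leak != null) {
         leak.record(hint); // remember where the buffer was last touched, for leak reports
       }
       return this; // returning normally lets construction under leak detection succeed
     }
   
     @Override
     protected void deallocate() {
       if (leak != null) {
         leak.close(this); // mark the object as released so it is not reported as leaked
       }
     }
   }
   ```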


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

[GitHub] [hbase] Apache-HBase commented on pull request #4598: HBASE-23330: Fix delegation token fetch with MasterRegistry (#1084)

2022-07-05 Thread GitBox


Apache-HBase commented on PR #4598:
URL: https://github.com/apache/hbase/pull/4598#issuecomment-1175488735

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 27s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 15s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   2m 27s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   1m 37s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   3m 38s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 17s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 13s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 13s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 37s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 37s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   3m 38s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 15s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 31s |  hbase-client in the patch passed.  
|
   | +1 :green_heart: |  unit  | 185m 57s |  hbase-server in the patch passed.  
|
   | +1 :green_heart: |  unit  |  14m  2s |  hbase-mapreduce in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   6m 53s |  hbase-thrift in the patch passed.  
|
   |  |   | 230m 25s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4598/1/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4598 |
   | JIRA Issue | HBASE-23330 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 7c463bf6015f 5.4.0-96-generic #109-Ubuntu SMP Wed Jan 12 
16:49:16 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / 57a3781160 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4598/1/testReport/
 |
   | Max. process+thread count | 2504 (vs. ulimit of 12500) |
   | modules | C: hbase-client hbase-server hbase-mapreduce hbase-thrift U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4598/1/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #4596: HBASE-26708 Simplity wrap implementation

2022-07-05 Thread GitBox


Apache-HBase commented on PR #4596:
URL: https://github.com/apache/hbase/pull/4596#issuecomment-1175484308

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 31s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 14s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   2m 19s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 50s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   3m 49s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 11s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  4s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 48s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 48s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   3m 49s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 57s |  hbase-client in the patch passed.  
|
   | -1 :x: |  unit  | 203m 20s |  hbase-server in the patch failed.  |
   |  |   | 222m 36s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4596/3/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4596 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux d466f8d57c7a 5.4.0-1071-aws #76~18.04.1-Ubuntu SMP Mon Mar 
28 17:49:57 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / f3f292fad4 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4596/3/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4596/3/testReport/
 |
   | Max. process+thread count | 3069 (vs. ulimit of 3) |
   | modules | C: hbase-client hbase-server U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4596/3/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #4589: HBASE-27172 Upgrade OpenTelemetry dependency to 1.15.0

2022-07-05 Thread GitBox


Apache-HBase commented on PR #4589:
URL: https://github.com/apache/hbase/pull/4589#issuecomment-1175468747

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   2m 58s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   2m 40s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 51s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   3m 43s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m  6s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 15s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 32s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 51s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 51s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   3m 40s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m  7s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 279m  1s |  root in the patch passed.  |
   |  |   | 307m 18s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4589/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4589 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 7dd5fba73820 5.4.0-90-generic #101-Ubuntu SMP Fri Oct 15 
20:00:55 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / f3f292fad4 |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4589/2/testReport/
 |
   | Max. process+thread count | 4951 (vs. ulimit of 3) |
   | modules | C: hbase-assembly . U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4589/2/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] virajjasani commented on pull request #4597: HBASE-26708 Fix multiple possible buffer leaks

2022-07-05 Thread GitBox


virajjasani commented on PR #4597:
URL: https://github.com/apache/hbase/pull/4597#issuecomment-1175464393

   I have already been running hbase with `PARANOID` level for quite some time. In 
fact, I added touch() calls at a few places on the decoder side and yet I could not 
see a single leak detector trace coming from hbase; it was only coming from the 
Netty side. But yes, I have a dedicated cluster running in SASL secure mode that has 
the paranoid level turned on (it's a test cluster, but we ingest a pretty heavy load 
using Phoenix).
   More details are available on HBASE-26708


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] normanmaurer commented on pull request #4597: HBASE-26708 Fix multiple possible buffer leaks

2022-07-05 Thread GitBox


normanmaurer commented on PR #4597:
URL: https://github.com/apache/hbase/pull/4597#issuecomment-1175461980

   @jojochuang +1, just be aware that this is very expensive and will take a lot 
of time :) That said, we do this as part of our Netty test suite as well to 
ensure we do not leak.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] jojochuang commented on pull request #4597: HBASE-26708 Fix multiple possible buffer leaks

2022-07-05 Thread GitBox


jojochuang commented on PR #4597:
URL: https://github.com/apache/hbase/pull/4597#issuecomment-1175456757

   It would be nice to run in Netty's paranoid leak detection mode:
   https://netty.io/wiki/reference-counted-objects.html#leak-detection-levels
   -Dio.netty.leakDetection.level=paranoid
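   
   As a rough, hypothetical sketch (using the plain `io.netty` API rather than HBase's 
shaded `org.apache.hbase.thirdparty` packages), the same level can also be forced 
programmatically, e.g. from a test, instead of passing the system property:
   
   ```java
   import io.netty.util.ResourceLeakDetector;
   
   public class EnableParanoidLeakDetection {
     public static void main(String[] args) {
       // Equivalent to starting the JVM with -Dio.netty.leakDetection.level=paranoid.
       // PARANOID tracks every allocation, so expect a significant slowdown.
       ResourceLeakDetector.setLevel(ResourceLeakDetector.Level.PARANOID);
       System.out.println("Netty leak detection level: " + ResourceLeakDetector.getLevel());
     }
   }
   ```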


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #4591: Backport "HBASE-27172 Upgrade OpenTelemetry dependency to 1.15.0" to branch-2.5

2022-07-05 Thread GitBox


Apache-HBase commented on PR #4591:
URL: https://github.com/apache/hbase/pull/4591#issuecomment-1175451852

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 11s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  4s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2.5 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 18s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   2m 37s |  branch-2.5 passed  |
   | +1 :green_heart: |  compile  |   1m 47s |  branch-2.5 passed  |
   | +1 :green_heart: |  shadedjars  |   3m 49s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m  7s |  branch-2.5 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 15s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 27s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 47s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 47s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   3m 47s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m  7s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 262m 55s |  root in the patch failed.  |
   |  |   | 287m 17s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4591/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4591 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 9529222b1827 5.4.0-90-generic #101-Ubuntu SMP Fri Oct 15 
20:00:55 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2.5 / e3963458b1 |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4591/2/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-root.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4591/2/testReport/
 |
   | Max. process+thread count | 2653 (vs. ulimit of 12500) |
   | modules | C: hbase-assembly . U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4591/2/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #4597: HBASE-26708 Fix multiple possible buffer leaks

2022-07-05 Thread GitBox


Apache-HBase commented on PR #4597:
URL: https://github.com/apache/hbase/pull/4597#issuecomment-1175438371

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 38s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 19s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   2m 22s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 14s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   3m 55s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 11s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  7s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 16s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 16s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   3m 52s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 23s |  hbase-common in the patch passed.  
|
   | +1 :green_heart: |  unit  |   1m  7s |  hbase-client in the patch passed.  
|
   | +1 :green_heart: |  unit  |   0m 56s |  hbase-asyncfs in the patch passed. 
 |
   | +1 :green_heart: |  unit  | 202m 14s |  hbase-server in the patch passed.  
|
   |  |   | 225m 28s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4597/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4597 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux fe5895e3f30d 5.4.0-1071-aws #76~18.04.1-Ubuntu SMP Mon Mar 
28 17:49:57 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / f3f292fad4 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4597/1/testReport/
 |
   | Max. process+thread count | 3260 (vs. ulimit of 3) |
   | modules | C: hbase-common hbase-client hbase-asyncfs hbase-server U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4597/1/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (HBASE-27170) ByteBuffAllocator leak when decompressing blocks near minSizeForReservoirUse

2022-07-05 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17562823#comment-17562823
 ] 

Hudson commented on HBASE-27170:


Results for branch branch-2.5
[build #157 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/157/]:
 (/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/157/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/157/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/157/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/157/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> ByteBuffAllocator leak when decompressing blocks near minSizeForReservoirUse
> 
>
> Key: HBASE-27170
> URL: https://issues.apache.org/jira/browse/HBASE-27170
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha-3, 2.4.12
> Environment: 2.4.6 with backported patches (can list if desired). 
> Running with temurinjdk11.0.12+7
>Reporter: Bryan Beaudreault
>Assignee: Bryan Beaudreault
>Priority: Major
>  Labels: patch-available
> Fix For: 2.5.0, 3.0.0-alpha-4, 2.4.14
>
>
> Recently I started testing out BucketCache on some of our new hbase2 
> clusters. When BucketCache is enabled, it causes all disk reads to use 
> ByteBuffAllocator in an attempt to avoid heap allocations. Without 
> BucketCache enabled, even with ByteBuffAllocator enabled it will not be used 
> for disk reads.
> At first this was amazing, we had close to 0% heap allocations which 
> drastically reduces CPU and GC time. Over time I noticed that the 
> ByteBuffAllocator pool filled up, and at that point all allocations come from 
> the heap and our heap allocation % goes to 100%.
> We were using default max buffer count, which was 4096 for a smaller host and 
> 7680 for a larger one. At first I figured we just needed more buffers, so I 
> upped it to 120k. It took longer, but still eventually exhausts the pool.
> This does not cause an OOM, because I made sure to allocate enough direct 
> memory for max buffer count * buffer size (65k). It just causes the 
> usedBufCount to exceed maxBufCount, resulting in 100% heap allocations going 
> forward. It never recovers from this state until the server is restarted.
> Some early observations:
>  * Running a major compaction causes a drastic up-tick in used buffers. Major 
> compacting a 1.5GB region could easily expand the usedBufCount by ~10,000. 
> Most of those are never recovered. I could keep compacting the same region 
> over and over, each time increasing the usedBufCount by 5000-15000, until max 
> is reached.
>  * Despite usedBufCount increasing, direct memory usage largely does not 
> increase. This indicates to me that the DirectByteBuffers are being reclaimed 
> by GC, but the Recycler is not being called.
>  ** This was confirmed with a heap dump, which showed really no obvious leak 
> in the typical sense. There were very very few DirectByteBuffers with 
> capacity 66560.
>  * Enabling BucketCache triggers use of ByteBuffAllocator, but I don't think 
> it's related to the actual problem. First of all, compactions skip the cache 
> so that would be odd. Secondly, I disabled hbase.block.data.cacheonread and 
> all other CacheConfig, so the BucketCache is going unused (0 bytes used) and 
> the problem still persists.
> Digging deeper, I instrumented ByteBuffAllocator in two ways:
>  # I added some trace loggings in getBuffer() and putbackBuffer() so I could 
> see the number of allocations, returns, pool size at the time, and a 
> stacktrace.
>  # I added netty's ResourceLeakDetector to the RefCnt class we use in 
> ByteBuffAllocator. Calling track() on creation of a RefCnt, record() in 
> RefCnt.retain(), and close() in RefCnt.deallocate().
> The ResourceLeakDetector immediately picked up leaks on start of the 
> regionserver. I saw leaks coming from both user requests and, as expected, 
> the Compactor. 
> I collected 21 unique LEAK stacktraces, which is too many to list. But all of 
> them 

[jira] [Created] (HBASE-27181) Replica region support in HBCK2 setRegionState option

2022-07-05 Thread Huaxiang Sun (Jira)
Huaxiang Sun created HBASE-27181:


 Summary: Replica region support in HBCK2 setRegionState option
 Key: HBASE-27181
 URL: https://issues.apache.org/jira/browse/HBASE-27181
 Project: HBase
  Issue Type: Improvement
  Components: hbck2
Affects Versions: 2.4.13
Reporter: Huaxiang Sun
Assignee: Huaxiang Sun


Replica region ids are not recognized by hbck2's setRegionState because they do not 
show up in meta. We have run into cases where we needed to set the region state in 
meta for replica regions in order to fix an inconsistency. We ended up writing the 
state manually into the meta table and doing a master failover to sync the state 
from the meta table.

 

hbck2's setRegionState needs to support replica region ids and handle them nicely.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-27170) ByteBuffAllocator leak when decompressing blocks near minSizeForReservoirUse

2022-07-05 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17562819#comment-17562819
 ] 

Hudson commented on HBASE-27170:


Results for branch master
[build #626 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/626/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/626/General_20Nightly_20Build_20Report/]






(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/626/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(x) {color:red}-1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/626/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> ByteBuffAllocator leak when decompressing blocks near minSizeForReservoirUse
> 
>
> Key: HBASE-27170
> URL: https://issues.apache.org/jira/browse/HBASE-27170
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha-3, 2.4.12
> Environment: 2.4.6 with backported patches (can list if desired). 
> Running with temurinjdk11.0.12+7
>Reporter: Bryan Beaudreault
>Assignee: Bryan Beaudreault
>Priority: Major
>  Labels: patch-available
> Fix For: 2.5.0, 3.0.0-alpha-4, 2.4.14
>
>
> Recently I started testing out BucketCache on some of our new hbase2 
> clusters. When BucketCache is enabled, it causes all disk reads to use 
> ByteBuffAllocator in an attempt to avoid heap allocations. Without 
> BucketCache enabled, even with ByteBuffAllocator enabled it will not be used 
> for disk reads.
> At first this was amazing, we had close to 0% heap allocations which 
> drastically reduces CPU and GC time. Over time I noticed that the 
> ByteBuffAllocator pool filled up, and at that point all allocations come from 
> the heap and our heap allocation % goes to 100%.
> We were using default max buffer count, which was 4096 for a smaller host and 
> 7680 for a larger one. At first I figured we just needed more buffers, so I 
> upped it to 120k. It took longer, but still eventually exhausts the pool.
> This does not cause an OOM, because I made sure to allocate enough direct 
> memory for max buffer count * buffer size (65k). It just causes the 
> usedBufCount to exceed maxBufCount, resulting in 100% heap allocations going 
> forward. It never recovers from this state until the server is restarted.
> Some early observations:
>  * Running a major compaction causes a drastic up-tick in used buffers. Major 
> compacting a 1.5GB region could easily expand the usedBufCount by ~10,000. 
> Most of those are never recovered. I could keep compacting the same region 
> over and over, each time increasing the usedBufCount by 5000-15000, until max 
> is reached.
>  * Despite usedBufCount increasing, direct memory usage largely does not 
> increase. This indicates to me that the DirectByteBuffers are being reclaimed 
> by GC, but the Recycler is not being called.
>  ** This was confirmed with a heap dump, which showed really no obvious leak 
> in the typical sense. There were very very few DirectByteBuffers with 
> capacity 66560.
>  * Enabling BucketCache triggers use of ByteBuffAllocator, but I don't think 
> it's related to the actual problem. First of all, compactions skip the cache 
> so that would be odd. Secondly, I disabled hbase.block.data.cacheonread and 
> all other CacheConfig, so the BucketCache is going unused (0 bytes used) and 
> the problem still persists.
> Digging deeper, I instrumented ByteBuffAllocator in two ways:
>  # I added some trace loggings in getBuffer() and putbackBuffer() so I could 
> see the number of allocations, returns, pool size at the time, and a 
> stacktrace.
>  # I added netty's ResourceLeakDetector to the RefCnt class we use in 
> ByteBuffAllocator. Calling track() on creation of a RefCnt, record() in 
> RefCnt.retain(), and close() in RefCnt.deallocate().
> The ResourceLeakDetector immediately picked up leaks on start of the 
> regionserver. I saw leaks coming from both user requests and, as expected, 
> the Compactor. 
> I collected 21 unique LEAK stacktraces, which is too many to list. But all of 
> them had 2 common roots, so below I post a full stacktrace for 2 examples of 
> those roots:
> {code:java}
> Created at:
> org.apache.hadoop.hbase.nio.RefCnt.<init>(RefCnt.java:58)
> 
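
The quoted description is truncated above, but the instrumentation it outlines 
(netty's ResourceLeakDetector wired into RefCnt: track() on creation, record() in 
retain(), close() in deallocate()) can be sketched roughly as follows. This is a 
hypothetical, self-contained illustration of that pattern, not the actual HBase 
RefCnt code; class and field names are assumptions:

```java
import org.apache.hbase.thirdparty.io.netty.util.AbstractReferenceCounted;
import org.apache.hbase.thirdparty.io.netty.util.ReferenceCounted;
import org.apache.hbase.thirdparty.io.netty.util.ResourceLeakDetector;
import org.apache.hbase.thirdparty.io.netty.util.ResourceLeakDetectorFactory;
import org.apache.hbase.thirdparty.io.netty.util.ResourceLeakTracker;

// Hypothetical sketch of the track()/record()/close() instrumentation described above.
public class LeakTrackedRefCnt extends AbstractReferenceCounted {

  private static final ResourceLeakDetector<LeakTrackedRefCnt> DETECTOR =
    ResourceLeakDetectorFactory.instance().newResourceLeakDetector(LeakTrackedRefCnt.class);

  // track() on creation; may return null when this allocation is not sampled.
  private final ResourceLeakTracker<LeakTrackedRefCnt> leak = DETECTOR.track(this);

  @Override
  public ReferenceCounted retain() {
    if (leak != null) {
      leak.record(); // record() in retain(): capture the call site for leak reports
    }
    return super.retain();
  }

  @Override
  public ReferenceCounted touch(Object hint) {
    if (leak != null) {
      leak.record(hint);
    }
    return this;
  }

  @Override
  protected void deallocate() {
    if (leak != null) {
      // close() in deallocate(): an object that is GC'd without this close()
      // is reported as a LEAK together with the recorded call sites.
      leak.close(this);
    }
  }
}
```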

[GitHub] [hbase] virajjasani commented on pull request #4597: Fix multiple possible buffer leaks

2022-07-05 Thread GitBox


virajjasani commented on PR #4597:
URL: https://github.com/apache/hbase/pull/4597#issuecomment-1175415861

   Let me try deploying the changes over to the secure k8s hbase cluster


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] virajjasani commented on pull request #4596: HBASE-26708 Simplity wrap implementation

2022-07-05 Thread GitBox


virajjasani commented on PR #4596:
URL: https://github.com/apache/hbase/pull/4596#issuecomment-1175404092

   I think we have some other issues with the server-side changes; I see 
multiple compaction failures:
   ```
   2022-07-05 19:00:06,828 WARN  [20-longCompactions-0] hdfs.DFSClient - Failed 
to connect to dn1/10.118.172.92:50010 for block 
BP-1698803826-10.118.165.103-1656451448670:blk_1074099523_359025, add to 
deadNodes and continue. 
   java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at 
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
at 
org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:539)
at 
org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:2913)
at 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:851)
at 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:753)
at 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:387)
at 
org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:843)
at 
org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:772)
at 
org.apache.hadoop.hdfs.DFSInputStream.seekToBlockSource(DFSInputStream.java:1825)
at 
org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:1035)
at 
org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:1074)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1138)
at 
org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:148)
at 
org.apache.hadoop.hbase.io.util.BlockIOUtils.readWithExtra(BlockIOUtils.java:180)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readAtOffset(HFileBlock.java:1450)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1681)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1492)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1308)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.readNextDataBlock(HFileReaderImpl.java:739)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$EncodedScanner.next(HFileReaderImpl.java:1480)
at 
org.apache.hadoop.hbase.io.HalfStoreFileReader$1.next(HalfStoreFileReader.java:140)
at 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:194)
at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:112)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:677)
at 
org.apache.hadoop.hbase.regionserver.compactions.Compactor.performCompaction(Compactor.java:417)
at 
org.apache.hadoop.hbase.regionserver.compactions.Compactor.compact(Compactor.java:345)
at 
org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:65)
at 
org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:122)
at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1464)
at 
org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:2286)
at 
org.apache.hadoop.hbase.regionserver.CompactSplit$CompactionRunner.doCompaction(CompactSplit.java:618)
at 
org.apache.hadoop.hbase.regionserver.CompactSplit$CompactionRunner.run(CompactSplit.java:666)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
   ```
   
   ```
   2022-07-05 18:59:43,010 WARN  [20-longCompactions-0] impl.BlockReaderFactory 
- I/O error constructing remote block reader.
   java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at 
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
at 
org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:539)
at 
org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:2913)
at 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:851)
at 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:753)
at 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:387)
at 

[GitHub] [hbase] shahrs87 commented on a diff in pull request #4556: HBASE-26913 Replication Observability Framework

2022-07-05 Thread GitBox


shahrs87 commented on code in PR #4556:
URL: https://github.com/apache/hbase/pull/4556#discussion_r914113473


##
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationMarkerChore.java:
##
@@ -0,0 +1,118 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.replication.regionserver;
+
+import static 
org.apache.hadoop.hbase.replication.master.ReplicationSinkTrackerTableCreator.REPLICATION_SINK_TRACKER_TABLE_NAME;
+
+import java.io.IOException;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.ScheduledChore;
+import org.apache.hadoop.hbase.Stoppable;
+import org.apache.hadoop.hbase.client.RegionInfo;
+import org.apache.hadoop.hbase.client.RegionInfoBuilder;
+import org.apache.hadoop.hbase.regionserver.MultiVersionConcurrencyControl;
+import org.apache.hadoop.hbase.regionserver.RegionServerServices;
+import org.apache.hadoop.hbase.regionserver.wal.WALUtil;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
+import org.apache.hadoop.hbase.wal.WAL;
+import org.apache.hadoop.hbase.wal.WALEdit;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * This chore is responsible to create replication marker rows with special 
WALEdit with family as
+ * {@link org.apache.hadoop.hbase.wal.WALEdit#METAFAMILY} and column qualifier 
as
+ * {@link WALEdit#REPLICATION_MARKER} and empty value. If config key
+ * {@link #REPLICATION_MARKER_ENABLED_KEY} is set to true, then we will create 
1 marker row every
+ * {@link #REPLICATION_MARKER_CHORE_DURATION_KEY} ms
+ * {@link 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader} 
will populate
+ * the Replication Marker edit with region_server_name, wal_name and 
wal_offset encoded in
+ * {@link 
org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.ReplicationMarkerDescriptor}
+ * object. {@link 
org.apache.hadoop.hbase.replication.regionserver.Replication} will change the
+ * REPLICATION_SCOPE for this edit to GLOBAL so that it can replicate. On the 
sink cluster,
+ * {@link org.apache.hadoop.hbase.replication.regionserver.ReplicationSink} 
will convert the
+ * ReplicationMarkerDescriptor into a Put mutation to 
REPLICATION_SINK_TRACKER_TABLE_NAME_STR table.
+ */
+@InterfaceAudience.Private
+public class ReplicationMarkerChore extends ScheduledChore {
+  private static final Logger LOG = 
LoggerFactory.getLogger(ReplicationMarkerChore.class);
+  private static final MultiVersionConcurrencyControl MVCC = new 
MultiVersionConcurrencyControl();
+  public static final RegionInfo REGION_INFO =
+RegionInfoBuilder.newBuilder(REPLICATION_SINK_TRACKER_TABLE_NAME).build();
+  private static final String DELIMITER = "_";
+  private final Configuration conf;
+  private final RegionServerServices rsServices;
+  private WAL wal;
+
+  public static final String REPLICATION_MARKER_ENABLED_KEY =
+"hbase.regionserver.replication.marker.enabled";
+  public static final boolean REPLICATION_MARKER_ENABLED_DEFAULT = false;
+
+  public static final String REPLICATION_MARKER_CHORE_DURATION_KEY =
+"hbase.regionserver.replication.marker.chore.duration";
+  public static final int REPLICATION_MARKER_CHORE_DURATION_DEFAULT = 30 * 1000; // 30 seconds
+
+  public ReplicationMarkerChore(final Stoppable stopper, final RegionServerServices rsServices,
+int period, Configuration conf) {
+super("ReplicationTrackerChore", stopper, period);
+this.conf = conf;
+this.rsServices = rsServices;
+  }
+
+  @Override
+  protected void chore() {
+if (wal == null) {
+  try {
+wal = rsServices.getWAL(null);
+  } catch (IOException ioe) {
+LOG.warn("Unable to get WAL ", ioe);
+// Shouldn't happen. Ignore and wait for the next chore run.
+return;
+  }
+}
+String serverName = rsServices.getServerName().getServerName();
+long timeStamp = EnvironmentEdgeManager.currentTime();
+// We only have timestamp in ReplicationMarkerDescriptor and the remaining properties walname,
+// regionserver name and wal offset at 

[GitHub] [hbase] shahrs87 commented on a diff in pull request #4556: HBASE-26913 Replication Observability Framework

2022-07-05 Thread GitBox


shahrs87 commented on code in PR #4556:
URL: https://github.com/apache/hbase/pull/4556#discussion_r914112082


##
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/master/ReplicationSinkTrackerTableCreator.java:
##
@@ -0,0 +1,99 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.replication.master;
+
+import static org.apache.hadoop.hbase.HConstants.NO_NONCE;
+
+import java.io.IOException;
+import java.util.concurrent.TimeUnit;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
+import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
+import org.apache.hadoop.hbase.master.MasterServices;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * This will create {@link #REPLICATION_SINK_TRACKER_TABLE_NAME_STR} table if
+ * hbase.regionserver.replication.sink.tracker.enabled config key is enabled and table not created
+ **/
+@InterfaceAudience.Private
+public final class ReplicationSinkTrackerTableCreator {
+  private static final Logger LOG =
+LoggerFactory.getLogger(ReplicationSinkTrackerTableCreator.class);
+  private static final Long TTL = TimeUnit.DAYS.toSeconds(365); // 1 year in seconds
+
+  public static final byte[] RS_COLUMN = Bytes.toBytes("region_server_name");
+  public static final byte[] WAL_NAME_COLUMN = Bytes.toBytes("wal_name");
+  public static final byte[] TIMESTAMP_COLUMN = Bytes.toBytes("timestamp");
+  public static final byte[] OFFSET_COLUMN = Bytes.toBytes("offset");
+
+  /** Will create {@link #REPLICATION_SINK_TRACKER_TABLE_NAME_STR} table if this conf is enabled **/
+  public static final String REPLICATION_SINK_TRACKER_ENABLED_KEY =

Review Comment:
   @Apache9 Do you still think we need changes in the ref guide or can we 
resolve this conversation ?
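
   For readers skimming the diff above, a hedged sketch of how sink-side code could use the quoted column constants to build a tracker-table Put. The row-key layout and the "info" column family are assumptions for illustration only; they are not taken from the PR.

   ```java
   // Hedged sketch: illustrative only. The column family and row-key layout below are
   // assumptions, not taken from the PR; the qualifier names come from the constants above.
   import org.apache.hadoop.hbase.client.Put;
   import org.apache.hadoop.hbase.util.Bytes;

   public class TrackerPutSketch {
     private static final byte[] FAMILY = Bytes.toBytes("info"); // hypothetical family

     public static Put markerPut(String regionServer, String walName, long timestamp, long offset) {
       // Illustrative row key: a simple concatenation of the marker properties.
       Put put = new Put(Bytes.toBytes(regionServer + "_" + walName + "_" + offset));
       put.addColumn(FAMILY, Bytes.toBytes("region_server_name"), Bytes.toBytes(regionServer));
       put.addColumn(FAMILY, Bytes.toBytes("wal_name"), Bytes.toBytes(walName));
       put.addColumn(FAMILY, Bytes.toBytes("timestamp"), Bytes.toBytes(timestamp));
       put.addColumn(FAMILY, Bytes.toBytes("offset"), Bytes.toBytes(offset));
       return put;
     }
   }
   ```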



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (HBASE-27173) Consider adding ResourceLeakDetector to RefCnt

2022-07-05 Thread Bryan Beaudreault (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Beaudreault resolved HBASE-27173.
---
Resolution: Duplicate

This ended up being added in HBASE-27170

> Consider adding ResourceLeakDetector to RefCnt
> --
>
> Key: HBASE-27173
> URL: https://issues.apache.org/jira/browse/HBASE-27173
> Project: HBase
>  Issue Type: Improvement
>Reporter: Bryan Beaudreault
>Priority: Minor
>
> In my quest to figure out HBASE-27170, I ended up adding ResourceLeakDetector 
> to RefCnt in my local fork. This immediately reports the leak to the logs, 
> making it hard to miss. ResourceLeakDetector is already active in the netty 
> layer, and the default state is designed to be very low overhead. We could 
> add it directly to RefCnt, or make it further configurable if we're concerned 
> about performance. Either way it's good to have and document, to aid future 
> leak investigations.
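
As an aside for anyone reproducing this locally, a hedged sketch of that kind of wiring, shown with the unshaded io.netty classes for brevity (HBase itself uses the shaded org.apache.hbase.thirdparty packages):

{code:java}
import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;
import io.netty.util.ResourceLeakDetector;

public class LeakDetectorSketch {
  public static void main(String[] args) {
    // PARANOID tracks every allocation; the default SIMPLE level only samples a fraction.
    ResourceLeakDetector.setLevel(ResourceLeakDetector.Level.PARANOID);

    ByteBuf buf = PooledByteBufAllocator.DEFAULT.directBuffer(64);
    buf.touch("allocated in LeakDetectorSketch"); // hint that shows up in the leak report
    buf = null;                                   // drop the reference without calling release()

    System.gc();
    // A later allocation gives the detector a chance to poll its reference queue and log the leak.
    PooledByteBufAllocator.DEFAULT.directBuffer(64).release();
  }
}
{code}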



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-27180) Multiple possible buffer leaks

2022-07-05 Thread Norman Maurer (Jira)
Norman Maurer created HBASE-27180:
-

 Summary: Multiple possible buffer leaks
 Key: HBASE-27180
 URL: https://issues.apache.org/jira/browse/HBASE-27180
 Project: HBase
  Issue Type: Bug
Reporter: Norman Maurer


When using ByteBuf you need to be very careful about releasing it as otherwise 
you might leak data. There are various places in the code-base where such a 
leak could happen as the buffer is not correctly released
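
A hedged reminder of the discipline involved (generic Netty usage, not a specific HBase call site): whichever component consumes the buffer must release it exactly once, on success and failure paths alike.

{code:java}
import io.netty.buffer.ByteBuf;
import io.netty.util.ReferenceCountUtil;

final class ReleaseDiscipline {
  private ReleaseDiscipline() {
  }

  /** Consume a buffer and guarantee it is released exactly once, even on error paths. */
  static byte[] consume(ByteBuf buf) {
    try {
      byte[] out = new byte[buf.readableBytes()];
      buf.readBytes(out);
      return out;
    } finally {
      ReferenceCountUtil.release(buf); // skipping this is exactly the kind of leak described above
    }
  }
}
{code}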



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [hbase] Apache-HBase commented on pull request #4598: HBASE-23330: Fix delegation token fetch with MasterRegistry (#1084)

2022-07-05 Thread GitBox


Apache-HBase commented on PR #4598:
URL: https://github.com/apache/hbase/pull/4598#issuecomment-1175331208

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 14s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ branch-2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 12s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   2m 29s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   4m 16s |  branch-2 passed  |
   | +1 :green_heart: |  checkstyle  |   1m 20s |  branch-2 passed  |
   | +1 :green_heart: |  spotless  |   0m 47s |  branch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   3m 37s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 10s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 17s |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 14s |  the patch passed  |
   | +1 :green_heart: |  javac  |   4m 14s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m 21s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |   8m 10s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.1.  |
   | +1 :green_heart: |  spotless  |   0m 43s |  patch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   4m  3s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 28s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  41m 11s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4598/1/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4598 |
   | JIRA Issue | HBASE-23330 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
spotless checkstyle compile |
   | uname | Linux b211aa9522eb 5.4.0-90-generic #101-Ubuntu SMP Fri Oct 15 
20:00:55 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / 57a3781160 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | Max. process+thread count | 69 (vs. ulimit of 12500) |
   | modules | C: hbase-client hbase-server hbase-mapreduce hbase-thrift U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4598/1/console 
|
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #4596: HBASE-26708 Simplity wrap implementation

2022-07-05 Thread GitBox


Apache-HBase commented on PR #4596:
URL: https://github.com/apache/hbase/pull/4596#issuecomment-1175328972

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 14s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  1s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m  9s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   2m 23s |  master passed  |
   | +1 :green_heart: |  compile  |   2m 53s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   0m 47s |  master passed  |
   | +1 :green_heart: |  spotless  |   0m 45s |  branch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   1m 58s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 11s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 12s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 55s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 55s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 15s |  The patch passed checkstyle 
in hbase-client  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  hbase-server: The patch 
generated 0 new + 12 unchanged - 2 fixed = 12 total (was 14)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  11m 48s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.2 3.3.1.  |
   | +1 :green_heart: |  spotless  |   0m 43s |  patch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   2m 11s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 19s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  36m 47s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4596/3/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4596 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
spotless checkstyle compile |
   | uname | Linux a80252b30781 5.4.0-90-generic #101-Ubuntu SMP Fri Oct 15 
20:00:55 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / f3f292fad4 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | Max. process+thread count | 64 (vs. ulimit of 3) |
   | modules | C: hbase-client hbase-server U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4596/3/console 
|
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] bbeaudreault commented on a diff in pull request #4595: HBASE-26950 Use AsyncConnection in ReplicationSink

2022-07-05 Thread GitBox


bbeaudreault commented on code in PR #4595:
URL: https://github.com/apache/hbase/pull/4595#discussion_r914042965


##
hbase-server/src/test/java/org/apache/hadoop/hbase/client/DummyAsyncTable.java:
##
@@ -0,0 +1,177 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import java.util.List;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.TimeUnit;
+import java.util.function.Function;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.filter.Filter;
+
+/**
+ * Can be overridden in UT if you only want to implement part of the methods in {@link AsyncTable}.
+ */
+public class DummyAsyncTable<C extends ScanResultConsumerBase> implements AsyncTable<C> {

Review Comment:
   is this actually used anywhere?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] bbeaudreault commented on a diff in pull request #4595: HBASE-26950 Use AsyncConnection in ReplicationSink

2022-07-05 Thread GitBox


bbeaudreault commented on code in PR #4595:
URL: https://github.com/apache/hbase/pull/4595#discussion_r914040557


##
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionUtils.java:
##
@@ -619,4 +615,13 @@ static void updateStats(Optional 
optStats,
 metrics -> ResultStatsUtil.updateStats(metrics, serverName, 
regionName, regionLoadStats));
 });
   }
+
+  public static AsyncConnection createAsyncConnection(Configuration conf) throws IOException {
+return FutureUtils.get(ConnectionFactory.createAsyncConnection(conf));
+  }

Review Comment:
   What's the benefit of this, versus just doing 
`ConnectionFactory.createAsyncConnection(conf).get()` in ReplicationSink?
   
   And do we need both of these methods, looks like just one is used?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] bbeaudreault commented on pull request #4585: HBASE-27078 Allow configuring a separate timeout for meta scans (branch-2 backport)

2022-07-05 Thread GitBox


bbeaudreault commented on PR #4585:
URL: https://github.com/apache/hbase/pull/4585#issuecomment-1175323321

   @apurtell one more ping, if you have time. Since this is just a backport 
I'll go ahead merging in the next day or two if I don't hear any concerns. 
Here's the relevant part I'd like another opinion on:
   
   > Another piece of complexity, covered by my 3rd commit, is that 
unfortunately in branch-2 scans use hbase.rpc.timeout instead of 
hbase.read.rpc.timeout. If anyone thinks it's allowed, I'd love to fix that. But 
I assume it's not allowed, so the 3rd commit handles the case that we want to 
use hbase.rpc.timeout for normal scans and hbase.client.meta.read.rpc.timeout 
for meta scans. This will be unified in 3.0.0 where scans will use 
hbase.read.rpc.timeout and hbase.client.meta.read.rpc.timeout for both 
AsyncTable and Table.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (HBASE-27078) Allow configuring a separate timeout for meta scans

2022-07-05 Thread Bryan Beaudreault (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Beaudreault updated HBASE-27078:
--
Release Note: It may be helpful to tune timeouts differently for meta scans 
than normal scans. Similar to hbase.read.rpc.timeout and 
hbase.client.scanner.timeout.period for normal scans, this issue adds two new 
configs for meta scans: hbase.client.meta.read.rpc.timeout and 
hbase.client.meta.scanner.timeout.period.  Each meta scan RPC call will be 
limited by hbase.client.meta.read.rpc.timeout, while 
hbase.client.meta.scanner.timeout.period acts as an overall operation timeout.
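
A hedged example of setting the new keys on a client Configuration; the key names come from the release note above, the values are illustrative only:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class MetaScanTimeoutExample {
  public static Configuration tunedClientConf() {
    Configuration conf = HBaseConfiguration.create();
    // Per-RPC timeout for each meta scan call (illustrative value: 10 seconds).
    conf.setLong("hbase.client.meta.read.rpc.timeout", 10_000L);
    // Overall operation timeout for a meta scan (illustrative value: 60 seconds).
    conf.setLong("hbase.client.meta.scanner.timeout.period", 60_000L);
    return conf;
  }
}
{code}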

> Allow configuring a separate timeout for meta scans
> ---
>
> Key: HBASE-27078
> URL: https://issues.apache.org/jira/browse/HBASE-27078
> Project: HBase
>  Issue Type: Improvement
>Reporter: Bryan Beaudreault
>Assignee: Bryan Beaudreault
>Priority: Major
>  Labels: patch-available
> Fix For: 2.5.0, 3.0.0-alpha-4
>
>
> There is a {{hbase.client.meta.operation.timeout}} but it does not apply to 
> meta scans, which are the primary use-case for clients (i.e. through 
> RegionLocator). 
> Many user-facing clients may want to have low rpc and scan timeouts. However, 
> in periods of meta hotspotting, those timeouts can be way too low for the 
> meta scans. The problem with low timeouts for meta scans is that without a 
> populated MetaCache, user requests cannot succeed. In fact, user requests 
> will continually try to re-scan meta until the MetaCache is populated. So 
> having a lower rpc timeout will cause a situation where meta scans cannot 
> succeed, and thus user requests cannot succeed. In this case I think it'd be 
> preferable to relax the rpc timeout for meta requests so that a few long 
> requests can unblock many faster requests.
> My suggestion would be to add an {{hbase.client.meta.rpc.timeout}} and ensure 
> that it applies to meta scans. I also think it would be less confusing to 
> have {{hbase.client.meta.operation.timeout}} apply as the scanner timeout 
> period for meta scans.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [hbase] virajjasani commented on pull request #4597: Fix multiple possible buffer leaks

2022-07-05 Thread GitBox


virajjasani commented on PR #4597:
URL: https://github.com/apache/hbase/pull/4597#issuecomment-1175309341

   Thanks a lot for helping with PR @normanmaurer.
   FYI @Apache9, some of the touch() calls as well as some generic fixes in 
AsyncFS done as part of this PR would be really helpful IMHO.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache9 opened a new pull request, #4598: HBASE-23330: Fix delegation token fetch with MasterRegistry (#1084)

2022-07-05 Thread GitBox


Apache9 opened a new pull request, #4598:
URL: https://github.com/apache/hbase/pull/4598

   Signed-off-by: Andrew Purtell 
   (cherry picked from commit d8b3f55518fcf73df54f67ab1cb2f3920088d70d)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #4597: Fix multiple possible buffer leaks

2022-07-05 Thread GitBox


Apache-HBase commented on PR #4597:
URL: https://github.com/apache/hbase/pull/4597#issuecomment-1175269440

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 21s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 17s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   2m 30s |  master passed  |
   | +1 :green_heart: |  compile  |   3m 45s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m  8s |  master passed  |
   | +1 :green_heart: |  spotless  |   0m 45s |  branch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   2m 44s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 10s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  9s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 40s |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 40s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m  7s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  11m 36s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.2 3.3.1.  |
   | -1 :x: |  spotless  |   0m 18s |  patch has 31 errors when running 
spotless:check, run spotless:apply to fix.  |
   | +1 :green_heart: |  spotbugs  |   3m  6s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 37s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  39m 59s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4597/1/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4597 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
spotless checkstyle compile |
   | uname | Linux dd1ab08d693d 5.4.0-96-generic #109-Ubuntu SMP Wed Jan 12 
16:49:16 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / f3f292fad4 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | spotless | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4597/1/artifact/yetus-general-check/output/patch-spotless.txt
 |
   | Max. process+thread count | 64 (vs. ulimit of 3) |
   | modules | C: hbase-common hbase-client hbase-asyncfs hbase-server U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4597/1/console 
|
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] normanmaurer opened a new pull request, #4597: Fix multiple possible buffer leaks

2022-07-05 Thread GitBox


normanmaurer opened a new pull request, #4597:
URL: https://github.com/apache/hbase/pull/4597

   Motivation:
   
   When using ByteBuf you need to be very careful about releasing it as 
otherwise you might leak data. There were various places in the code-base where 
such a leak could happen.
   
   Modifications:
   
   - Fix possible buffer leaks
   - Ensure we call touch(...) so its easier to debug buffer leaks
   
   Result:
   
   Fix buffer leaks


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #4589: HBASE-27172 Upgrade OpenTelemetry dependency to 1.15.0

2022-07-05 Thread GitBox


Apache-HBase commented on PR #4589:
URL: https://github.com/apache/hbase/pull/4589#issuecomment-1175223027

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 40s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 19s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   2m 18s |  master passed  |
   | +1 :green_heart: |  compile  |   5m 52s |  master passed  |
   | +1 :green_heart: |  spotless  |   0m 41s |  branch has no errors when 
running spotless:check.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 15s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  3s |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m 45s |  the patch passed  |
   | +1 :green_heart: |  javac  |   5m 45s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  hadoopcheck  |  11m 45s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.2 3.3.1.  |
   | +1 :green_heart: |  spotless  |   0m 39s |  patch has no errors when 
running spotless:check.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 16s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  35m 34s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4589/2/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4589 |
   | Optional Tests | dupname asflicense javac hadoopcheck spotless xml compile 
|
   | uname | Linux 63311cc1727a 5.4.0-1025-aws #25~18.04.1-Ubuntu SMP Fri Sep 
11 12:03:04 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / f3f292fad4 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | Max. process+thread count | 137 (vs. ulimit of 3) |
   | modules | C: hbase-assembly . U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4589/2/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (HBASE-27155) Improvements to low level scanner tracing

2022-07-05 Thread Nick Dimiduk (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17562731#comment-17562731
 ] 

Nick Dimiduk commented on HBASE-27155:
--

[~apurtell]

I haven't performed a 1-to-1 audit of these metrics vs. the data I'm attaching 
to span events on HBASE-27153, but a quick look makes me think that we could 
collect all this data via tracing. It may be challenging to extract the span 
event data from the span repository -- I haven't tried interacting with the 
likes of Jaeger or X-Ray at that API level. That would be the next step for 
replicating the above.

Let me take a closer look at the metrics you expose in your patch and see what 
I can do with otel.
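
For reference, a hedged sketch of recording one of the counters quoted below as a span event with the OpenTelemetry API; this is illustration only, not the HBASE-27153 code, and the attribute name is borrowed from the metrics dump:

{code:java}
import io.opentelemetry.api.common.AttributeKey;
import io.opentelemetry.api.common.Attributes;
import io.opentelemetry.api.trace.Span;

public class ScannerSpanEventSketch {
  /** Hypothetical helper: attach a block-read timing to whatever span is current. */
  public static void recordBlockRead(long nanos) {
    Span.current().addEvent("block_read",
      Attributes.of(AttributeKey.longKey("block_read_ns"), nanos));
  }
}
{code}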

> Improvements to low level scanner tracing
> -
>
> Key: HBASE-27155
> URL: https://issues.apache.org/jira/browse/HBASE-27155
> Project: HBase
>  Issue Type: Improvement
>  Components: Scanners, tracing
>Reporter: Andrew Kyle Purtell
>Priority: Major
>
> Related to HBASE-27153, consider tracer semantic attributes for low level 
> scanner details.
> Consider 
> https://issues.apache.org/jira/secure/attachment/13006571/W-7665966-Instrument-low-level-scan-details-branch-2.2.patch
>  (from HBASE-24637).  This was used to collect detailed metrics of the 
> decisions of ScanQueryMatcher and related classes.
> {noformat}
> metrics: [ "block_read_keys": 477 "block_read_ns": 3427040
>  "block_reads": 13 "block_seek_ns": 1606370 "block_seeks": 169
>  "block_unpack_ns": 10256 "block_unpacks": 13 "cells_matched": 165
>  "cells_matched__hbase:meta,,1.1588230740__info": 165
>  "column_hint_include": 148 "memstore_next": 72
>  "memstore_next_ns": 136671 "memstore_seek": 2
>  "memstore_seek_ns": 631629 "reseeks": 36 "sqm_hint_done": 17
>  "sqm_hint_include": 74 "sqm_hint_seek_next_col": 74
>  "store_next": 276
>  "store_next__1c930a35ff8041368a05817adbdcce97": 40
>  "store_next__2644194fdf794815abdc940c183dab88": 40
>  "store_next__32ce31753fb244668f788fb94ab02dff": 40
>  "store_next__61c8423b9d8846c99a61cd2996b5b621": 116
>  "store_next__f4f7878c9fcf40d9902416d5c7a4097a": 40
>  "store_next_ns": 1891634
>  "store_next_ns__1c930a35ff8041368a05817adbdcce97": 269383
>  "store_next_ns__2644194fdf794815abdc940c183dab88": 299936
>  "store_next_ns__32ce31753fb244668f788fb94ab02dff": 288594
>  "store_next_ns__61c8423b9d8846c99a61cd2996b5b621": 594313
>  "store_next_ns__f4f7878c9fcf40d9902416d5c7a4097a": 439408
>  "store_reseek": 164
>  "store_reseek__1c930a35ff8041368a05817adbdcce97": 32
>  "store_reseek__2644194fdf794815abdc940c183dab88": 32
>  "store_reseek__32ce31753fb244668f788fb94ab02dff": 32
>  "store_reseek__61c8423b9d8846c99a61cd2996b5b621": 36
>  "store_reseek__f4f7878c9fcf40d9902416d5c7a4097a": 32
>  "store_reseek_ns": 2969978
>  "store_reseek_ns__1c930a35ff8041368a05817adbdcce97": 359489
>  "store_reseek_ns__2644194fdf794815abdc940c183dab88": 595115
>  "store_reseek_ns__32ce31753fb244668f788fb94ab02dff": 474642
>  "store_reseek_ns__61c8423b9d8846c99a61cd2996b5b621": 1013188
>  "store_reseek_ns__f4f7878c9fcf40d9902416d5c7a4097a": 527544
>  "store_seek": 5
>  "store_seek__1c930a35ff8041368a05817adbdcce97": 1
>  "store_seek__2644194fdf794815abdc940c183dab88": 1
>  "store_seek__32ce31753fb244668f788fb94ab02dff": 1
>  "store_seek__61c8423b9d8846c99a61cd2996b5b621": 1
>  "store_seek__f4f7878c9fcf40d9902416d5c7a4097a": 1
>  "store_seek_ns": 8862786
>  "store_seek_ns__1c930a35ff8041368a05817adbdcce97": 830421
>  "store_seek_ns__2644194fdf794815abdc940c183dab88": 585899
>  "store_seek_ns__32ce31753fb244668f788fb94ab02dff": 483605
>  "store_seek_ns__61c8423b9d8846c99a61cd2996b5b621": 5958072
>  "store_seek_ns__f4f7878c9fcf40d9902416d5c7a4097a": 1004789
>  "versions_hint_include": 74 "versions_hint_seek_next_col": 74 ]
> {noformat}
> We can see the differences between seek time and reseek time and we get the 
> counts for same, so we can analyze if SQM is making optimal choices (or less 
> optimal choices) or not, or if behavior has changed; and we can identify 
> particular store file(s) that might be outliers for some reason when hunting 
> for sources of regression. We get the time required to unpack blocks (on 
> average). We get a count of hints supplied by base SQM functionality or 
> filters. We get the relative contributions of query processing time 
> separately from memstore and store files. 
> Perhaps this can be done conditionally for scans that are selected for 
> tracing. Of course there is a performance concern, so it must be done such 
> that the overheads really are conditional on if the path is being actively 
> traced, and measured carefully to decide if it should be committed or not. 
> WDYT [~ndimiduk] [~zhangduo]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [hbase] Apache-HBase commented on pull request #4591: Backport "HBASE-27172 Upgrade OpenTelemetry dependency to 1.15.0" to branch-2.5

2022-07-05 Thread GitBox


Apache-HBase commented on PR #4591:
URL: https://github.com/apache/hbase/pull/4591#issuecomment-1175221912

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 14s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ branch-2.5 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 18s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   2m 31s |  branch-2.5 passed  |
   | +1 :green_heart: |  compile  |   6m 27s |  branch-2.5 passed  |
   | +1 :green_heart: |  spotless  |   0m 44s |  branch has no errors when 
running spotless:check.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 17s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  9s |  the patch passed  |
   | +1 :green_heart: |  compile  |   6m 29s |  the patch passed  |
   | +1 :green_heart: |  javac  |   6m 29s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  hadoopcheck  |  11m 27s |  Patch does not cause any 
errors with Hadoop 2.10.0 or 3.1.2 3.2.1.  |
   | +1 :green_heart: |  spotless  |   0m 42s |  patch has no errors when 
running spotless:check.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 21s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  37m 24s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4591/2/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4591 |
   | Optional Tests | dupname asflicense javac hadoopcheck spotless xml compile 
|
   | uname | Linux 3a12a7d38503 5.4.0-90-generic #101-Ubuntu SMP Fri Oct 15 
20:00:55 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2.5 / e3963458b1 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | Max. process+thread count | 138 (vs. ulimit of 12500) |
   | modules | C: hbase-assembly . U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4591/2/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (HBASE-27153) Improvements to read-path tracing

2022-07-05 Thread Nick Dimiduk (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17562719#comment-17562719
 ] 

Nick Dimiduk commented on HBASE-27153:
--

I'm checking to see if the test failures are related. I think they're the build 
machine being underpowered.

> Improvements to read-path tracing
> -
>
> Key: HBASE-27153
> URL: https://issues.apache.org/jira/browse/HBASE-27153
> Project: HBase
>  Issue Type: Improvement
>  Components: Operability, regionserver
>Affects Versions: 2.5.0, 3.0.0-alpha-2, 2.6.0
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Major
> Fix For: 2.5.0, 3.0.0-alpha-4
>
>
> Take another pass through tracing of the read path, make adjustments 
> accordingly. One of the major concerns raised previously is that we create a 
> span for every block access. Start by simplifying this to trace events and 
> see what else comes up.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-27153) Improvements to read-path tracing

2022-07-05 Thread Nick Dimiduk (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17562718#comment-17562718
 ] 

Nick Dimiduk commented on HBASE-27153:
--

Yes, if we can.

> Improvements to read-path tracing
> -
>
> Key: HBASE-27153
> URL: https://issues.apache.org/jira/browse/HBASE-27153
> Project: HBase
>  Issue Type: Improvement
>  Components: Operability, regionserver
>Affects Versions: 2.5.0, 3.0.0-alpha-2, 2.6.0
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Major
> Fix For: 2.5.0, 3.0.0-alpha-4
>
>
> Take another pass through tracing of the read path, make adjustments 
> accordingly. One of the major concerns raised previously is that we create a 
> span for every block access. Start by simplifying this to trace events and 
> see what else comes up.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (HBASE-14452) Allow enabling tracing from configuration

2022-07-05 Thread Nick Dimiduk (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-14452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk resolved HBASE-14452.
--
Resolution: Not A Problem

OpenTelemetry solves this problem at the process level via Sampler 
configuration. I think there's nothing additional for us to do.
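
As a hedged illustration of what that looks like at the SDK level (the javaagent equivalent is driven by the otel.traces.sampler and otel.traces.sampler.arg properties):

{code:java}
import io.opentelemetry.sdk.trace.SdkTracerProvider;
import io.opentelemetry.sdk.trace.samplers.Sampler;

public class SamplerConfigSketch {
  public static SdkTracerProvider sampledProvider() {
    // Sample roughly 1% of traces at the process level; no application-side changes needed.
    return SdkTracerProvider.builder()
      .setSampler(Sampler.traceIdRatioBased(0.01))
      .build();
  }
}
{code}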

> Allow enabling tracing from configuration
> -
>
> Key: HBASE-14452
> URL: https://issues.apache.org/jira/browse/HBASE-14452
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Operability
>Reporter: Nick Dimiduk
>Priority: Major
>
> Over on HDFS-8213 [~colinmccabe] convinced me that we should enable operators 
> to trace HDFS requests independent of applications enabling the same. At the 
> risk of adding a new, superset configuration, I think we should allow the 
> same for HBase. Any objections to following HDFS's lead on this?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-26046) [JDK17] Add a JDK17 profile

2022-07-05 Thread Jira


[ 
https://issues.apache.org/jira/browse/HBASE-26046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17562708#comment-17562708
 ] 

Ramón García Fernández commented on HBASE-26046:


One does not really need a profile specific to JDK 17. All the flags here can
be used with JDK 11, and adding a new profile adds unnecessary complexity.

> [JDK17] Add a JDK17 profile
> ---
>
> Key: HBASE-26046
> URL: https://issues.apache.org/jira/browse/HBASE-26046
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> While HBase builds fine with JDK17, tests fail because a number of Java SDK 
> modules are no longer exposed to unnamed modules by default. We need to open 
> them up.
> Without which, the tests fail for errors like:
> {noformat}
> [ERROR] Tests run: 6, Failures: 0, Errors: 6, Skipped: 0, Time elapsed: 0.469 
> s <<< FAILURE! - in org.apache.hadoop.hbase.rest.model.TestNamespacesModel
> [ERROR] org.apache.hadoop.hbase.rest.model.TestNamespacesModel.testBuildModel 
>  Time elapsed: 0.273 s  <<< ERROR!
> java.lang.ExceptionInInitializerError
> at 
> org.apache.hadoop.hbase.rest.model.TestNamespacesModel.(TestNamespacesModel.java:43)
> Caused by: java.lang.reflect.InaccessibleObjectException: Unable to make 
> protected final java.lang.Class 
> java.lang.ClassLoader.defineClass(java.lang.String,byte[],int,int) throws 
> java.lang.ClassFormatError accessible: module java.base does not "opens 
> java.lang" to unnamed module @56ef9176
> at 
> org.apache.hadoop.hbase.rest.model.TestNamespacesModel.(TestNamespacesModel.java:43)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [hbase] bbeaudreault commented on a diff in pull request #4410: HBASE-27002 Config BucketCache as victim handler of LRUCache

2022-07-05 Thread GitBox


bbeaudreault commented on code in PR #4410:
URL: https://github.com/apache/hbase/pull/4410#discussion_r913756205


##
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheFactory.java:
##
@@ -113,7 +117,17 @@ public static BlockCache createBlockCache(Configuration conf) {
 LOG.warn(
   "From HBase 2.0 onwards only combined mode of LRU cache and bucket cache is available");
   }
-  return bucketCache == null ? l1Cache : new CombinedBlockCache(l1Cache, bucketCache);
+
+  if (bucketCache == null) {
+return l1Cache;
+  }
+
+  if (conf.getBoolean(BLOCKCACHE_VICTIM_HANDLER_ENABLED_KEY,
+  BLOCKCACHE_VICTIM_HANDLER_ENABLED_DEFAULT)) {
+return new InclusiveCombinedBlockCache(l1Cache, bucketCache);

Review Comment:
   @liangxs what do you think about having the victim cache still be 
preferential?
   
   DATA goes direct to BucketCache
   META goes direct to LRU, and victims flush to BucketCache
   
   This seems like a good compromise of the performance. META still prefers 
faster LRU on-heap, but if too big to fit ends up in BucketCache. 
   
   If we can agree to that, personally I think it should just be the default -- 
no new config. 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #4596: HBASE-26708 Simplity wrap implementation

2022-07-05 Thread GitBox


Apache-HBase commented on PR #4596:
URL: https://github.com/apache/hbase/pull/4596#issuecomment-1175006416

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 55s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  2s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 13s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   2m 52s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 11s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 35s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 45s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 11s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 53s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  7s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  7s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 28s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 13s |  hbase-client in the patch passed.  
|
   | -1 :x: |  unit  | 243m 54s |  hbase-server in the patch failed.  |
   |  |   | 267m 46s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4596/2/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4596 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 4ffda11006cf 5.4.0-90-generic #101-Ubuntu SMP Fri Oct 15 
20:00:55 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / f3f292fad4 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4596/2/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4596/2/testReport/
 |
   | Max. process+thread count | 2656 (vs. ulimit of 3) |
   | modules | C: hbase-client hbase-server U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4596/2/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #4596: HBASE-26708 Simplity wrap implementation

2022-07-05 Thread GitBox


Apache-HBase commented on PR #4596:
URL: https://github.com/apache/hbase/pull/4596#issuecomment-1174958969

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 41s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  2s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 12s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   2m 20s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 57s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   3m 41s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 11s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 22s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 56s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 56s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   3m 42s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 18s |  hbase-client in the patch passed.  
|
   | -1 :x: |  unit  | 193m 23s |  hbase-server in the patch failed.  |
   |  |   | 212m 25s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4596/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4596 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 732cc688b3ca 5.4.0-1071-aws #76~18.04.1-Ubuntu SMP Mon Mar 
28 17:49:57 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / f3f292fad4 |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   | unit | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4596/2/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4596/2/testReport/
 |
   | Max. process+thread count | 3745 (vs. ulimit of 3) |
   | modules | C: hbase-client hbase-server U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4596/2/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #4596: HBASE-26708 Simplity wrap implementation

2022-07-05 Thread GitBox


Apache-HBase commented on PR #4596:
URL: https://github.com/apache/hbase/pull/4596#issuecomment-1174782949

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 41s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 11s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   2m  3s |  master passed  |
   | +1 :green_heart: |  compile  |   2m 42s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   0m 39s |  master passed  |
   | +1 :green_heart: |  spotless  |   0m 39s |  branch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   1m 46s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 10s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  5s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 46s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 46s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 12s |  The patch passed checkstyle 
in hbase-client  |
   | +1 :green_heart: |  checkstyle  |   0m 28s |  hbase-server: The patch 
generated 0 new + 12 unchanged - 2 fixed = 12 total (was 14)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  11m 43s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.2 3.3.1.  |
   | +1 :green_heart: |  spotless  |   0m 39s |  patch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   1m 58s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 14s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  34m 17s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4596/2/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4596 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
spotless checkstyle compile |
   | uname | Linux 9119ad95c49a 5.4.0-1025-aws #25~18.04.1-Ubuntu SMP Fri Sep 
11 12:03:04 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / f3f292fad4 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | Max. process+thread count | 64 (vs. ulimit of 3) |
   | modules | C: hbase-client hbase-server U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4596/2/console 
|
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] 2005hithlj commented on a diff in pull request #4523: HBASE-27104 Add a tool command list_unknownservers

2022-07-05 Thread GitBox


2005hithlj commented on code in PR #4523:
URL: https://github.com/apache/hbase/pull/4523#discussion_r913506949


##
hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java:
##
@@ -443,6 +443,8 @@ public class HMaster extends HBaseServerBase implements Maste
   /** jetty server for master to redirect requests to regionserver infoServer 
*/
   private Server masterJettyServer;
 
+  private Set unknownServers;

Review Comment:
   Thanks for the review, @Apache9.
   I'll try another UT implementation, but I've been thinking about it for a long time and haven't figured out how to generate unknown servers yet. Do you have any suggestions?






[jira] [Commented] (HBASE-26708) Netty "leak detected" and OutOfDirectMemoryError due to direct memory buffering

2022-07-05 Thread Duo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17562401#comment-17562401
 ] 

Duo Zhang commented on HBASE-26708:
---

I guess the problem is in NettyRpcServer's decoding code.

To reuse code with SimpleRpcServer, the implementation is not written in the netty style. If sasl is used, what we pass to the process method is actually a wrapped sasl message, not an rpc call message.

But in the implementations of NettyServerRpcConnection.process(ByteBuf) and NettyServerRpcConnection.process(ByteBuff), we save buf::release as the callCleanup, expecting later logic to store this cleanup hook in the ServerCall and release the buffer when the rpc call is finished.

For normal rpc calls without sasl this is OK, as what we decode from netty is exactly a serialized rpc call. But if sasl is enabled, we are in trouble. See the code in ServerRpcConnection.processUnwrappedData: we need several buffers to read the data first, and only then can we determine whether the unwrapped data is enough to construct an rpc call. In this method, although we have already consumed netty's ByteBuf, we never invoke callCleanup to release it, and that is where the memory leak happens.

Let me provide a PR to fix this problem first. In general, I would prefer to just remove the SimpleRpcServer implementation and rewrite the decode and encode parts with netty, to make the code clearer.

Thanks.
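
To make the buffer-ownership problem concrete, here is a minimal, self-contained sketch of the pattern described above (hypothetical names, not the actual HBase classes): a release hook is captured for every incoming frame; on the normal path it travels with the call and runs on completion, while on the sasl path the frame's bytes are copied into a staging buffer and the hook is dropped without ever running, so the frame is never released.

{code:java}
import java.io.ByteArrayOutputStream;

/**
 * Illustrative sketch only (hypothetical names, not HBase code): shows how a
 * per-frame release hook can be dropped on a sasl-style unwrap path.
 */
public class CallCleanupLeakSketch {

  /** Stand-in for a reference-counted network buffer. */
  static final class RefCountedBuffer {
    final byte[] data;
    int refCnt = 1;
    RefCountedBuffer(byte[] data) { this.data = data; }
    void release() { refCnt--; }
  }

  // Staging area that accumulates unwrapped bytes until a full rpc call is available.
  private final ByteArrayOutputStream unwrappedData = new ByteArrayOutputStream();

  void process(RefCountedBuffer frame, boolean saslWrapped) {
    // Hook that is supposed to release the frame once the rpc call finishes.
    Runnable callCleanup = frame::release;
    if (!saslWrapped) {
      // Non-sasl path: the frame is a complete rpc call, so the hook travels with
      // the call and is invoked when the call completes.
      dispatchCall(callCleanup);
    } else {
      // Sasl path: the bytes are copied into the staging buffer, but nothing ever
      // invokes callCleanup, so refCnt stays at 1; this is the leak.
      unwrappedData.write(frame.data, 0, frame.data.length);
    }
  }

  private void dispatchCall(Runnable cleanup) {
    // Pretend the call completed immediately and release the frame.
    cleanup.run();
  }

  public static void main(String[] args) {
    CallCleanupLeakSketch conn = new CallCleanupLeakSketch();

    RefCountedBuffer plain = new RefCountedBuffer(new byte[] { 1, 2, 3 });
    conn.process(plain, false);
    System.out.println("plain rpc frame refCnt = " + plain.refCnt);          // 0: released

    RefCountedBuffer saslWrapped = new RefCountedBuffer(new byte[] { 4, 5, 6 });
    conn.process(saslWrapped, true);
    System.out.println("sasl wrapped frame refCnt = " + saslWrapped.refCnt); // 1: leaked
  }
}
{code}

The direction of the fix described above is to make sure the unwrap path releases the source buffer once its bytes have been copied out, whether or not a complete rpc call could be assembled from them.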

> Netty "leak detected" and OutOfDirectMemoryError due to direct memory 
> buffering
> ---
>
> Key: HBASE-26708
> URL: https://issues.apache.org/jira/browse/HBASE-26708
> Project: HBase
>  Issue Type: Bug
>  Components: rpc
>Affects Versions: 2.5.0, 2.4.6
>Reporter: Viraj Jasani
>Priority: Critical
>
> Under constant data ingestion, using default Netty based RpcServer and 
> RpcClient implementation results in OutOfDirectMemoryError, supposedly caused 
> by leaks detected by Netty's LeakDetector.
> {code:java}
> 2022-01-25 17:03:10,084 ERROR [S-EventLoopGroup-1-3] 
> util.ResourceLeakDetector - java:115)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.expandCumulation(ByteToMessageDecoder.java:538)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder$1.cumulate(ByteToMessageDecoder.java:97)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:274)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
>   
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>   
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
>   
> org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:795)
>   
> org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)
>   
> org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
>   
> org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
>   
> org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>   
> org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>   java.lang.Thread.run(Thread.java:748)
>  {code}
> {code:java}
> 2022-01-25 17:03:14,014 ERROR [S-EventLoopGroup-1-3] 
> util.ResourceLeakDetector - 
> apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:507)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:446)
>   
> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
>   
> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>   
> 

[jira] [Commented] (HBASE-26708) Netty "leak detected" and OutOfDirectMemoryError due to direct memory buffering

2022-07-05 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17562380#comment-17562380
 ] 

Viraj Jasani commented on HBASE-26708:
--

Thanks [~zhangduo]. Posted some findings on the PR [here|https://github.com/apache/hbase/pull/4596#issuecomment-1174651104].


[GitHub] [hbase] Apache9 commented on pull request #4596: HBASE-26708 Simplity wrap implementation

2022-07-05 Thread GitBox


Apache9 commented on PR #4596:
URL: https://github.com/apache/hbase/pull/4596#issuecomment-1174657599

   OK, the leak is on the decode path, where we read from the channel. Let me take a look at the related classes.





[GitHub] [hbase] virajjasani commented on pull request #4596: HBASE-26708 Simplity wrap implementation

2022-07-05 Thread GitBox


virajjasani commented on PR #4596:
URL: https://github.com/apache/hbase/pull/4596#issuecomment-1174655400

   `ByteToMessageDecoder`:
   
   ```
   @Override
   public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
       if (msg instanceof ByteBuf) {
           selfFiredChannelRead = true;
           CodecOutputList out = CodecOutputList.newInstance();
           try {
               first = cumulation == null;
               cumulation = cumulator.cumulate(ctx.alloc(),
                       first ? Unpooled.EMPTY_BUFFER : cumulation, (ByteBuf) msg);
               callDecode(ctx, cumulation, out);                      <== Line 279
           } catch (DecoderException e) {
               throw e;
           } catch (Exception e) {
               throw new DecoderException(e);
           } finally {
               try {
                   if (cumulation != null && !cumulation.isReadable()) {
                       numReads = 0;
                       cumulation.release();
                       cumulation = null;
                   } else if (++numReads >= discardAfterReads) {
                       ...
                       ...
                       ...
   ```
   
   ```
   protected void callDecode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
       try {
           while (in.isReadable()) {
               final int outSize = out.size();

               if (outSize > 0) {
                   fireChannelRead(ctx, out, outSize);
                   out.clear();

                   // Check if this handler was removed before continuing with decoding.
                   // If it was removed, it is not safe to continue to operate on the buffer.
                   //
                   // See:
                   // - https://github.com/netty/netty/issues/4635
                   if (ctx.isRemoved()) {
                       break;
                   }
               }

               int oldInputLength = in.readableBytes();
               decodeRemovalReentryProtection(ctx, in, out);          <== (Line 449)

               // Check if this handler was removed before continuing the loop.
               // If it was removed, it is not safe to continue to operate on the buffer.
               //
               // See https://github.com/netty/netty/issues/1664
               if (ctx.isRemoved()) {
                   break;
               }
               ...
               ...
               ...
   ```
   
   ```
   final void decodeRemovalReentryProtection(ChannelHandlerContext ctx, ByteBuf in, List<Object> out)
           throws Exception {
       decodeState = STATE_CALLING_CHILD_DECODE;
       try {
           decode(ctx, in, out);                                      <== (Line 510)
       } finally {
           boolean removePending = decodeState == STATE_HANDLER_REMOVED_PENDING;
           decodeState = STATE_INIT;
           if (removePending) {
               fireChannelRead(ctx, out, out.size());
               out.clear();
               handlerRemoved(ctx);
           }
       }
   }
   ```
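
   For context, a minimal decoder sketch (illustrative only, not HBase's actual `SaslChallengeDecoder` or call decoder) showing the ownership contract implied by the snippets above: the base class owns and releases the cumulation buffer `in`, and anything the subclass retains, such as a retained slice handed downstream, must be released by whoever receives it. Plain `io.netty` imports are used here just to keep the sketch standalone; in hbase-server they would be the shaded `org.apache.hbase.thirdparty.io.netty` packages.

   ```java
   import java.util.List;

   import io.netty.buffer.ByteBuf;
   import io.netty.channel.ChannelHandlerContext;
   import io.netty.handler.codec.ByteToMessageDecoder;

   /**
    * Illustrative length-prefixed frame decoder (not the HBase decoder).
    * ByteToMessageDecoder manages and releases the cumulation buffer; the
    * retained slice added to 'out' must be released by the downstream handler.
    */
   public class LengthPrefixedFrameDecoder extends ByteToMessageDecoder {

     @Override
     protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
       // Wait until the 4-byte length prefix is available.
       if (in.readableBytes() < 4) {
         return;
       }
       in.markReaderIndex();
       int frameLength = in.readInt();
       // Not enough bytes for the whole frame yet: rewind and wait for more input.
       if (in.readableBytes() < frameLength) {
         in.resetReaderIndex();
         return;
       }
       // readRetainedSlice bumps the reference count; ownership moves to the next
       // handler, which must release the slice when it is done with it.
       out.add(in.readRetainedSlice(frameLength));
     }
   }
   ```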





[GitHub] [hbase] virajjasani commented on pull request #4596: HBASE-26708 Simplity wrap implementation

2022-07-05 Thread GitBox


virajjasani commented on PR #4596:
URL: https://github.com/apache/hbase/pull/4596#issuecomment-1174653213

   Do you think `handlerRemoved` should be overridden in `SaslChallengeDecoder`? (though `ByteToMessageDecoder` has `handlerRemoved` already)
   
   Somehow I feel the leak needs to be handled in the decoder, as per the logs below; I might be wrong though.
   ```
   2022-07-05 05:59:52,308 ERROR [S-EventLoopGroup-1-4] 
util.ResourceLeakDetector - 
apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:510)

org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:449)

org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:279)
   
   ```
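
   For what it's worth, in Netty's `ByteToMessageDecoder` the `handlerRemoved` method is final and already takes care of the cumulation buffer; the hook a subclass can override is `handlerRemoved0`. A hedged sketch of what such an override could look like for a decoder that keeps its own retained buffer between reads (illustrative only, not a claim about what `SaslChallengeDecoder` should do):

   ```java
   import java.util.List;

   import io.netty.buffer.ByteBuf;
   import io.netty.channel.ChannelHandlerContext;
   import io.netty.handler.codec.ByteToMessageDecoder;
   import io.netty.util.ReferenceCountUtil;

   /** Illustrative decoder that retains partial data itself and drops it on removal. */
   public class RetainingDecoderSketch extends ByteToMessageDecoder {

     // Hypothetical buffer retained by the decoder between decode() calls.
     private ByteBuf pending;

     @Override
     protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
       // ... decoding logic that may stash bytes into 'pending' would go here ...
     }

     @Override
     protected void handlerRemoved0(ChannelHandlerContext ctx) {
       // Invoked from the final handlerRemoved(): release decoder-private state so it
       // cannot be leaked once the handler leaves the pipeline.
       if (pending != null) {
         ReferenceCountUtil.safeRelease(pending);
         pending = null;
       }
     }
   }
   ```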





[GitHub] [hbase] virajjasani commented on pull request #4596: HBASE-26708 Simplity wrap implementation

2022-07-05 Thread GitBox


virajjasani commented on PR #4596:
URL: https://github.com/apache/hbase/pull/4596#issuecomment-1174651104

   Unfortunately the leaks are still present with this patch:
   
   ```
   2022-07-05 05:59:56,052 ERROR [S-EventLoopGroup-1-4] 
util.ResourceLeakDetector - 
ByteToMessageDecoder.expandCumulation(ByteToMessageDecoder.java:541)

org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder$1.cumulate(ByteToMessageDecoder.java:97)

org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:277)

org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)

org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)

org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)

org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)

org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)

org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)

org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)

org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:800)

org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)

org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)

org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)

org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)

org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
java.lang.Thread.run(Thread.java:750)
   : 27 leak records were discarded because they were duplicates
   ```
   
   
   ```
   2022-07-05 05:59:56,048 ERROR [S-EventLoopGroup-1-4] 
util.ResourceLeakDetector - oll.EpollEventLoop.run(EpollEventLoop.java:378)

org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)

org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)

org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
java.lang.Thread.run(Thread.java:750)
   Created at:

org.apache.hbase.thirdparty.io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:401)

org.apache.hbase.thirdparty.io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:188)

org.apache.hbase.thirdparty.io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:179)

org.apache.hbase.thirdparty.io.netty.channel.unix.PreferredDirectByteBufAllocator.ioBuffer(PreferredDirectByteBufAllocator.java:53)

org.apache.hbase.thirdparty.io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:120)

org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollRecvByteAllocatorHandle.allocate(EpollRecvByteAllocatorHandle.java:75)

org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:785)

org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)

org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)

org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)

org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)

org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
java.lang.Thread.run(Thread.java:750)
   : 8 leak records were discarded because they were duplicates
   ```
   
   ```
   2022-07-05 05:59:52,308 ERROR [S-EventLoopGroup-1-4] 
util.ResourceLeakDetector - 
apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:510)