[ 
https://issues.apache.org/jira/browse/HDDS-9032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duong updated HDDS-9032:
------------------------
    Description: 
Ozone datanode today uses two separate Netty memory pool (PooledByteBufAllocator) instances.

The first pool instance is created with NettyServer (used by the Ratis server and the Replication server). All NettyServer instances share the same PooledByteBufAllocator instance (i.e., the same direct memory pool), which is created and cached by ByteBufAllocatorPreferDirectHolder.allocator. This resolves to the usage of "io.grpc.netty.Utils#getByteBufAllocator":
{code:java}
public static ByteBufAllocator getByteBufAllocator(boolean forceHeapBuffer) {
  if (Boolean.parseBoolean(
      System.getProperty(
          "org.apache.ratis.thirdparty.io.grpc.netty.useCustomAllocator",
          "true"))) {
    boolean defaultPreferDirect = PooledByteBufAllocator.defaultPreferDirect();
    logger.log(
        Level.FINE,
        String.format(
            "Using custom allocator: forceHeapBuffer=%s, defaultPreferDirect=%s",
            forceHeapBuffer,
            defaultPreferDirect));
    if (forceHeapBuffer || !defaultPreferDirect) {
      return ByteBufAllocatorPreferHeapHolder.allocator;
    } else {
      return ByteBufAllocatorPreferDirectHolder.allocator;
    }
  }
  // ... (rest of the method elided)
}
{code}
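To make the two-pool situation concrete, here is a minimal sketch (not Ozone code; it assumes Netty 4.1 on the classpath). An independently constructed PooledByteBufAllocator, like the one the gRPC holder caches, accounts for its direct memory entirely separately from the process-wide default pool:
{code:java}
import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

public class TwoPoolsDemo {
  public static void main(String[] args) {
    // Pool 1: the process-wide default pool (what CodecBuffer uses).
    PooledByteBufAllocator defaultPool = PooledByteBufAllocator.DEFAULT;
    // Pool 2: a stand-in for the separately constructed allocator that
    // the gRPC/Ratis ByteBufAllocatorPreferDirectHolder caches.
    PooledByteBufAllocator grpcPool = new PooledByteBufAllocator(true);

    ByteBuf buf = grpcPool.directBuffer(1 << 20); // 1 MiB from pool 2
    try {
      // The allocation shows up only in pool 2's metrics: each pool
      // reserves and accounts for its direct memory independently.
      System.out.println("default pool used:   "
          + defaultPool.metric().usedDirectMemory());
      System.out.println("grpc-side pool used: "
          + grpcPool.metric().usedDirectMemory());
    } finally {
      buf.release();
    }
  }
}
{code}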
The second instance is created by [CodecBuffer|https://github.com/apache/ozone/blob/master/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/db/CodecBuffer.java#L85-86]. CodecBuffer uses the default Netty memory pool, created and cached as PooledByteBufAllocator.DEFAULT, to create temporary buffers for encoding/decoding data (to/from storage such as RocksDB, or the network).
{code:java}
  private static final ByteBufAllocator POOL
      = PooledByteBufAllocator.DEFAULT; {code}
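The encode/decode pattern this pool backs can be sketched as follows (a simplified illustration, not the actual CodecBuffer API; the encode helper is hypothetical): a scratch ByteBuf is borrowed from the default pool and released as soon as the bytes are copied out.
{code:java}
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;
import io.netty.buffer.PooledByteBufAllocator;
import java.nio.charset.StandardCharsets;

public class CodecScratchDemo {
  // Same pool as CodecBuffer: the process-wide default instance.
  private static final ByteBufAllocator POOL = PooledByteBufAllocator.DEFAULT;

  // Hypothetical codec step: serialize a value via a pooled scratch buffer.
  static byte[] encode(String value) {
    ByteBuf buf = POOL.buffer();
    try {
      buf.writeCharSequence(value, StandardCharsets.UTF_8);
      byte[] out = new byte[buf.readableBytes()];
      buf.readBytes(out);
      return out;
    } finally {
      buf.release(); // hand the scratch buffer back to the pool
    }
  }

  public static void main(String[] args) {
    System.out.println(new String(encode("key-1"), StandardCharsets.UTF_8));
    // prints "key-1"
  }
}
{code}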
Netty has a decent set of configuration options to ensure its memory pool usage doesn't exceed the JVM limits, e.g. the maximum direct memory (io.netty.maxDirectMemory).
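Those limits and the per-pool sizing can be inspected as follows (a sketch assuming Netty 4.1; PlatformDependent is Netty-internal API). Note that each PooledByteBufAllocator instance gets its own arenas, so running two instances roughly doubles the reserved footprint:
{code:java}
import io.netty.buffer.PooledByteBufAllocator;
import io.netty.util.internal.PlatformDependent;

public class PoolLimitsDemo {
  public static void main(String[] args) {
    // Cap Netty enforces on direct memory; tunable with
    // -Dio.netty.maxDirectMemory (defaults to the JVM's direct memory limit).
    System.out.println("maxDirectMemory = " + PlatformDependent.maxDirectMemory());

    // Default per-pool sizing: number of direct arenas and chunk size.
    long chunkSize = (long) PooledByteBufAllocator.defaultPageSize()
        << PooledByteBufAllocator.defaultMaxOrder();
    System.out.println("direct arenas = " + PooledByteBufAllocator.defaultNumDirectArena());
    System.out.println("chunk size = " + chunkSize + " bytes");
  }
}
{code}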


> CodecBuffer results in Ozone Datanode using 2 separate Netty memory pool 
> instances
> ----------------------------------------------------------------------------------
>
>                 Key: HDDS-9032
>                 URL: https://issues.apache.org/jira/browse/HDDS-9032
>             Project: Apache Ozone
>          Issue Type: Bug
>            Reporter: Duong
>            Priority: Major
>


--
This message was sent by Atlassian Jira
(v8.20.10#820010)
