zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r376870439
 
 

 ##########
 File path: flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/ZeroCopyNettyMessageDecoder.java
 ##########
 @@ -0,0 +1,280 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.runtime.io.network.buffer.Buffer;
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import org.apache.flink.shaded.netty4.io.netty.channel.ChannelHandlerContext;
+import org.apache.flink.shaded.netty4.io.netty.channel.ChannelInboundHandlerAdapter;
+
+import static org.apache.flink.runtime.io.network.netty.NettyMessage.FRAME_HEADER_LENGTH;
+import static org.apache.flink.runtime.io.network.netty.NettyMessage.MAGIC_NUMBER;
+import static org.apache.flink.util.Preconditions.checkState;
+
+/**
+ * Decodes messages from the fragmentary netty buffers. This decoder assumes the
+ * messages have the following format:
+ * +--------------+------------------+--------------------------------+
+ * | FRAME_HEADER |  MESSAGE_HEADER  |     DATA BUFFER (Optional)     |
+ * +--------------+------------------+--------------------------------+
+ * and it decodes each part in order.
+ *
+ * <p>This decoder tries its best to eliminate copying. For the frame header and message header,
+ * it only cumulates data when they span multiple input buffers. For the buffer part, it
+ * copies the data directly into the input channels' buffers to avoid further copying.
+ *
+ * <p>The format of the frame header is
+ * +------------------+------------------+--------+
+ * | FRAME LENGTH (4) | MAGIC NUMBER (4) | ID (1) |
+ * +------------------+------------------+--------+
+ */
+public class ZeroCopyNettyMessageDecoder extends ChannelInboundHandlerAdapter {
+
+    private static final int INITIAL_MESSAGE_HEADER_BUFFER_LENGTH = 128;
+
+    /** The parser to parse the message header. */
+    private final NettyMessageParser messageParser;
+
+    /** The buffer used to cumulate the frame header part. */
+    private ByteBuf frameHeaderBuffer;
+
+    /** The buffer used to receive the message header part. */
+    private ByteBuf messageHeaderBuffer;
+
+    /** Which part of the current message is being decoded. */
+    private DecodeStep decodeStep;
+
+    /** How many bytes have been decoded in current step. */
+    private int decodedBytesOfCurrentStep;
+
+    /** The intermediate state when decoding the current message. */
+    private final MessageDecodeIntermediateState intermediateState;
+
+    ZeroCopyNettyMessageDecoder(NettyMessageParser messageParser) {
+        this.messageParser = messageParser;
+        this.intermediateState = new MessageDecodeIntermediateState();
+    }
+
+    @Override
+    public void channelActive(ChannelHandlerContext ctx) throws Exception {
+        super.channelActive(ctx);
+
+        frameHeaderBuffer = ctx.alloc().directBuffer(FRAME_HEADER_LENGTH);
 
 Review comment:
   > I think the writer side is different from the reader side in that we only have one message to deal with at a time on the receiver side; however, we will have multiple outgoing messages at a time, and they all occupy header space.
   
   On the write side there is still only one header in use at a time. When the netty thread writes and flushes a message while the socket is not writable, the header buffer is not recycled after the write, and the next message from the partition will not be written to the socket because of the `channel.isWritable()` check. Once the channel becomes writable again and the previous header buffer has been recycled, the next message is written via `PartitionRequestQueue#channelWritabilityChanged`. So it is the same situation on both the sender and receiver sides: only one header buffer is actually in use per channel. Of course, we can create a separate ticket for tracking the reuse of the same header buffer on the sender side.
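   
   For illustration, here is a minimal sketch of the writability gating described above. The handler and method names mirror `PartitionRequestQueue#channelWritabilityChanged`, but the class and its body are simplified and hypothetical, not the actual Flink implementation:
   
   ```java
   import org.apache.flink.shaded.netty4.io.netty.channel.Channel;
   import org.apache.flink.shaded.netty4.io.netty.channel.ChannelHandlerContext;
   import org.apache.flink.shaded.netty4.io.netty.channel.ChannelInboundHandlerAdapter;
   
   import java.util.ArrayDeque;
   
   /**
    * Sketch: messages are only written while the channel is writable, so at most
    * one serialized header is in flight per channel at any point in time.
    */
   public class WritabilityGatedQueue extends ChannelInboundHandlerAdapter {
   
       // Hypothetical queue of pending messages; Flink queues reader views instead.
       private final ArrayDeque<Object> pendingMessages = new ArrayDeque<>();
   
       void writeAndFlushNextMessageIfPossible(Channel channel) {
           // Stop as soon as the socket back-pressures; the header buffer of the
           // in-flight message is recycled only once its write has completed.
           while (channel.isWritable()) {
               Object message = pendingMessages.poll();
               if (message == null) {
                   return;
               }
               channel.writeAndFlush(message);
           }
       }
   
       @Override
       public void channelWritabilityChanged(ChannelHandlerContext ctx) throws Exception {
           // Resume writing once the previously written data has drained.
           writeAndFlushNextMessageIfPossible(ctx.channel());
           super.channelWritabilityChanged(ctx);
       }
   }
   ```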
   
   > Besides, I think the reason to rely on the Netty ByteBuf is that, to deal with different messages, the messageHeaderBuffer needs to modify its capacity to be able to hold the headers of different messages. If we use an unpooled buffer, we have to reimplement the logic of copying to a larger buffer.
   
   I am not quite sure how much benefit we can get from the buffer resize implementation in the netty stack. The header length is fixed for `BufferResponse` and only `ErrorResponse` has a variable length. But error messages are infrequent and might not need to reuse the same buffer. We should weigh the pros and cons of the different options. From my point of view it is better not to rely on internal implementation details of the netty stack unless necessary. It is unsafe to expose a netty-managed `ByteBuf` to the outside, which might cause inconsistency or leak issues if handled carelessly.
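   
   As a sketch of the alternative suggested here (the class, method names, and size constant are hypothetical, not taken from the PR): allocate one unpooled, fixed-size buffer for the frequent fixed-length `BufferResponse` header, and serve the rare variable-length `ErrorResponse` with a one-off allocation instead of resizing a shared buffer:
   
   ```java
   import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
   import org.apache.flink.shaded.netty4.io.netty.buffer.Unpooled;
   
   /**
    * Hypothetical header buffer holder: one fixed-size unpooled buffer is reused
    * for fixed-length headers; variable-length headers get a one-off allocation.
    */
   public class ReusableHeaderBuffer {
   
       // Assumed fixed header size; the real constant would live in NettyMessage.
       private static final int FIXED_HEADER_LENGTH = 128;
   
       private final ByteBuf fixedHeaderBuffer = Unpooled.directBuffer(FIXED_HEADER_LENGTH);
   
       /** Returns the reused buffer for fixed-length headers, reset for writing. */
       ByteBuf forFixedHeader() {
           fixedHeaderBuffer.clear();
           return fixedHeaderBuffer;
       }
   
       /** Allocates a one-off buffer for an infrequent variable-length header. */
       ByteBuf forVariableHeader(int length) {
           return Unpooled.directBuffer(length);
       }
   
       void release() {
           fixedHeaderBuffer.release();
       }
   }
   ```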
   
   > Besides, from Flink's perspective the Netty memory is allocated in units of an arena chunk (which we modified to 4M). So as long as we do not consume more than 4M of memory in Netty, it should be better to reuse the allocated memory.
   
   The chunk size adjustment should be orthogonal to this PR. Both approaches would reduce the overall netty memory overhead in different cases, so it is better to measure them separately to see the effects. I will create a separate ticket for tracking this issue.
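   
   For context, the chunk size of Netty's pooled allocator is `pageSize << maxOrder`, so a 4M chunk corresponds to the default 8K page size with `maxOrder = 9`. A minimal sketch of that configuration (an illustration of the mechanism, not Flink's actual `NettyBufferPool` code):
   
   ```java
   import org.apache.flink.shaded.netty4.io.netty.buffer.PooledByteBufAllocator;
   
   public class ChunkSizeExample {
   
       public static void main(String[] args) {
           int pageSize = 8192; // Netty's default page size
           int maxOrder = 9;    // chunkSize = pageSize << maxOrder = 4 MiB
   
           // Direct-only arenas: 0 heap arenas, 1 direct arena for this example.
           PooledByteBufAllocator allocator =
               new PooledByteBufAllocator(true, 0, 1, pageSize, maxOrder);
   
           System.out.println("chunk size: " + (pageSize << maxOrder) + " bytes");
           allocator.directBuffer(1024).release();
       }
   }
   ```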

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
