[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-11 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r391399168
 
 

 ##
 File path: flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyMessageClientDecoderDelegateTest.java
 ##
 @@ -0,0 +1,339 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.core.memory.MemorySegment;
+import org.apache.flink.core.memory.MemorySegmentFactory;
+import org.apache.flink.runtime.io.network.TestingPartitionRequestClient;
+import org.apache.flink.runtime.io.network.buffer.Buffer;
+import org.apache.flink.runtime.io.network.buffer.FreeingBufferRecycler;
+import org.apache.flink.runtime.io.network.buffer.NetworkBuffer;
+import org.apache.flink.runtime.io.network.buffer.NetworkBufferPool;
+import org.apache.flink.runtime.io.network.partition.consumer.InputChannelBuilder;
+import org.apache.flink.runtime.io.network.partition.consumer.InputChannelID;
+import org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannel;
+import org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate;
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import org.apache.flink.shaded.netty4.io.netty.channel.embedded.EmbeddedChannel;
+
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+
+import javax.annotation.Nullable;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import static junit.framework.TestCase.assertEquals;
+import static junit.framework.TestCase.assertTrue;
+import static org.apache.flink.runtime.io.network.netty.NettyMessage.BufferResponse;
+import static org.apache.flink.runtime.io.network.netty.NettyMessage.ErrorResponse;
+import static org.apache.flink.runtime.io.network.netty.NettyTestUtil.verifyBufferResponseHeader;
+import static org.apache.flink.runtime.io.network.netty.NettyTestUtil.verifyErrorResponse;
+import static org.apache.flink.runtime.io.network.partition.InputChannelTestUtils.createRemoteInputChannel;
+import static org.apache.flink.runtime.io.network.partition.InputChannelTestUtils.createSingleInputGate;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.fail;
+
+/**
+ * Tests the client side message decoder.
+ */
+public class NettyMessageClientDecoderDelegateTest {
+
+   private static final int BUFFER_SIZE = 1024;
+
+   private static final int NUMBER_OF_BUFFER_RESPONSES = 5;
+
+   private static final NettyBufferPool ALLOCATOR = new NettyBufferPool(1);
+
+   private EmbeddedChannel channel;
+
+   private NetworkBufferPool networkBufferPool;
+
+   private SingleInputGate inputGate;
+
+   private InputChannelID inputChannelId;
+
+   private InputChannelID releasedInputChannelId;
+
+   @Before
+   public void setUp() throws IOException, InterruptedException {
+   CreditBasedPartitionRequestClientHandler handler = new CreditBasedPartitionRequestClientHandler();
+   networkBufferPool = new NetworkBufferPool(
+   NUMBER_OF_BUFFER_RESPONSES,
+   BUFFER_SIZE,
+   NUMBER_OF_BUFFER_RESPONSES);
+   channel = new EmbeddedChannel(new NettyMessageClientDecoderDelegate(handler));
+
+   inputGate = createSingleInputGate(1);
+   RemoteInputChannel inputChannel = createRemoteInputChannel(
+   inputGate,
+   new TestingPartitionRequestClient(),
+   networkBufferPool);
+   inputGate.assignExclusiveSegments();
+   inputChannel.requestSubpartition(0);
+   handler.addInputChannel(inputChannel);
+   inputChannelId = inputChannel.getInputChannelId();
+
+   SingleInputGate releasedInputGate = createSingleInputGate(1);
+   RemoteInputChannel releasedInputChannel = new InputChannelBuilder()
+   .setMemorySegmentProvider(networkBufferPool)
+   .buildRemoteAndS

[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-11 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r391397625
 
 

 ##
 File path: flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyMessageClientDecoderDelegateTest.java
 ##

[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-11 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r391397736
 
 

 ##
 File path: flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyMessageClientDecoderDelegateTest.java
 ##

[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-11 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r391397108
 
 

 ##
 File path: flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyMessageClientDecoderDelegateTest.java
 ##
 @@ -0,0 +1,339 @@
+   @Before
+   public void setUp() throws IOException, InterruptedException {
 
 Review comment:
   setUp -> setup?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-11 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r391396891
 
 

 ##
 File path: flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyMessageClientDecoderDelegateTest.java
 ##

[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-11 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r391375251
 
 

 ##
 File path: flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyMessageClientSideSerializationTest.java
 ##
 @@ -0,0 +1,234 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.core.memory.MemorySegment;
+import org.apache.flink.core.memory.MemorySegmentFactory;
+import org.apache.flink.runtime.io.network.TestingPartitionRequestClient;
+import org.apache.flink.runtime.io.network.buffer.Buffer;
+import org.apache.flink.runtime.io.network.buffer.BufferCompressor;
+import org.apache.flink.runtime.io.network.buffer.BufferDecompressor;
+import org.apache.flink.runtime.io.network.buffer.FreeingBufferRecycler;
+import org.apache.flink.runtime.io.network.buffer.NetworkBuffer;
+import org.apache.flink.runtime.io.network.buffer.NetworkBufferPool;
+import org.apache.flink.runtime.io.network.partition.consumer.InputChannelID;
+
+import org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannel;
+import org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate;
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import org.apache.flink.shaded.netty4.io.netty.channel.embedded.EmbeddedChannel;
+
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+import org.junit.experimental.runners.Enclosed;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Random;
+
+import static org.apache.flink.runtime.io.network.netty.NettyMessage.BufferResponse;
+import static org.apache.flink.runtime.io.network.netty.NettyMessage.ErrorResponse;
+import static org.apache.flink.runtime.io.network.netty.NettyMessage.NettyMessageEncoder;
+import static org.apache.flink.runtime.io.network.netty.NettyTestUtil.encodeAndDecode;
+import static org.apache.flink.runtime.io.network.netty.NettyTestUtil.verifyBufferResponseHeader;
+import static org.apache.flink.runtime.io.network.netty.NettyTestUtil.verifyErrorResponse;
+import static org.apache.flink.runtime.io.network.partition.InputChannelTestUtils.createRemoteInputChannel;
+import static org.apache.flink.runtime.io.network.partition.InputChannelTestUtils.createSingleInputGate;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+
+/**
+ * Tests for the serialization and deserialization of the various {@link NettyMessage} sub-classes
+ * sent from server side to client side.
+ */
+@RunWith(Enclosed.class)
+public class NettyMessageClientSideSerializationTest {
+
+   /**
+* Test the serialization of {@link BufferResponse}.
+*/
+   @RunWith(Parameterized.class)
+   public static class BufferResponseTest extends AbstractClientSideSerializationTest {
+
+   private static final BufferCompressor COMPRESSOR = new BufferCompressor(BUFFER_SIZE, "LZ4");
+
+   private static final BufferDecompressor DECOMPRESSOR = new BufferDecompressor(BUFFER_SIZE, "LZ4");
+
+   private final Random random = new Random();
+
+   // ------------------------------------------------------------------------
+   //  parameters
+   // ------------------------------------------------------------------------
+
+   private final boolean testReadOnlyBuffer;
+
+   private final boolean testCompressedBuffer;
+
+   @Parameterized.Parameters(name = "testReadOnlyBuffer = {0}, testCompressedBuffer = {1}")
+   public static Collection testReadOnlyBuffer() {
+   return Arrays.asList(new Object[][] {
+   {false, false},
+   {true, false},
+   {false, true},
+   {true, true}
+ 
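The parameter matrix quoted above is just the cartesian product of the two boolean flags. As a stand-alone illustration of how those four combinations arise (plain Java, no JUnit or Flink dependencies; the class and method names below are hypothetical, not part of the PR):

```java
import java.util.ArrayList;
import java.util.List;

public class ParamMatrix {

    // Builds the same four combinations as the quoted @Parameterized.Parameters
    // method: {false, false}, {true, false}, {false, true}, {true, true}.
    public static List<Object[]> combinations() {
        List<Object[]> params = new ArrayList<>();
        for (boolean compressed : new boolean[] {false, true}) {
            for (boolean readOnly : new boolean[] {false, true}) {
                params.add(new Object[] {readOnly, compressed});
            }
        }
        return params;
    }

    public static void main(String[] args) {
        for (Object[] p : combinations()) {
            System.out.println("readOnly=" + p[0] + ", compressed=" + p[1]);
        }
    }
}
```

With JUnit's `Parameterized` runner, each returned `Object[]` becomes one test instantiation, so every test method in the class runs once per combination.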

[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-11 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r391372489
 
 

 ##
 File path: flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/ByteBufUtilsTest.java
 ##
 @@ -0,0 +1,128 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import org.apache.flink.shaded.netty4.io.netty.buffer.Unpooled;
+import org.junit.Test;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertSame;
+
+/**
+ * Tests the methods in {@link ByteBufUtils}.
+ */
+public class ByteBufUtilsTest {
 
 Review comment:
   Extend `TestLogger`, also for the other newly added tests. 
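For context: Flink's `TestLogger` is a common JUnit base class that logs each test's name and outcome, which makes failures easier to locate in CI logs. A minimal stand-alone sketch of the same idea (plain Java, no JUnit; this harness is an illustrative assumption, not Flink's actual implementation):

```java
public class LoggingTestBase {

    // Runs a named test body and logs start/success/failure around it,
    // similar in spirit to what extending a shared TestLogger base gives you.
    // Returns true if the body completed without throwing.
    public static boolean runLogged(String testName, Runnable body) {
        System.out.println("==== Starting " + testName + " ====");
        try {
            body.run();
            System.out.println("==== Finished " + testName + " ====");
            return true;
        } catch (RuntimeException e) {
            System.out.println("==== Failed " + testName + ": " + e.getMessage() + " ====");
            return false;
        }
    }

    public static void main(String[] args) {
        runLogged("exampleTest", () -> {
            if (2 + 2 != 4) {
                throw new IllegalStateException("math is broken");
            }
        });
    }
}
```

In the actual PR, the fix is simply `public class ByteBufUtilsTest extends TestLogger { ... }` for each new test class.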




[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-11 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r390814434
 
 

 ##
 File path: flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/CreditBasedPartitionRequestClientHandlerTest.java
 ##
 @@ -419,6 +458,86 @@ public void testNotifyCreditAvailableAfterReleased() throws Exception {
}
}
 
+   @Test
+   public void testReadBufferResponseBeforeReleasingChannel() throws Exception {
+   testReadBufferResponseWithReleasingOrRemovingChannel(false, true);
+   }
+
+   @Test
+   public void testReadBufferResponseBeforeRemovingChannel() throws Exception {
+   testReadBufferResponseWithReleasingOrRemovingChannel(true, true);
+   }
+
+   @Test
+   public void testReadBufferResponseAfterReleasingChannel() throws Exception {
+   testReadBufferResponseWithReleasingOrRemovingChannel(false, false);
+   }
+
+   @Test
+   public void testReadBufferResponseAfterRemovingChannel() throws Exception {
+   testReadBufferResponseWithReleasingOrRemovingChannel(true, false);
+   }
+
+   private void testReadBufferResponseWithReleasingOrRemovingChannel(
+   boolean isRemoved,
+   boolean readBeforeReleasingOrRemoving) throws Exception {
+
+   int bufferSize = 1024;
+
+   NetworkBufferPool networkBufferPool = new NetworkBufferPool(10, bufferSize, 2);
+   SingleInputGate inputGate = createSingleInputGate(1);
+   RemoteInputChannel inputChannel = new InputChannelBuilder()
+   .setMemorySegmentProvider(networkBufferPool)
+   .buildRemoteAndSetToGate(inputGate);
+   inputGate.assignExclusiveSegments();
+
+   CreditBasedPartitionRequestClientHandler handler = new CreditBasedPartitionRequestClientHandler();
+   EmbeddedChannel embeddedChannel = new EmbeddedChannel(handler);
+   handler.addInputChannel(inputChannel);
+
+   try {
+   if (readBeforeReleasingOrRemoving) {
+   // Release the channel.
+   inputGate.close();
+   if (isRemoved) {
+   handler.removeInputChannel(inputChannel);
+   }
+   }
+
+   BufferResponse bufferResponse = createBufferResponse(
+   TestBufferFactory.createBuffer(bufferSize),
+   0,
+   inputChannel.getInputChannelId(),
+   1,
+   new NetworkBufferAllocator(handler));
+
+   if (!readBeforeReleasingOrRemoving) {
+   // Release the channel.
+   inputGate.close();
+   if (isRemoved) {
+   handler.removeInputChannel(inputChannel);
+   }
+   }
+
+   handler.channelRead(null, bufferResponse);
+
+   assertEquals(0, inputChannel.getNumberOfQueuedBuffers());
+   if (readBeforeReleasingOrRemoving) {
+   assertNull(bufferResponse.getBuffer());
+   } else {
+   assertNotNull(bufferResponse.getBuffer());
+   assertTrue(bufferResponse.getBuffer().isRecycled());
+   }
+
+   embeddedChannel.runScheduledPendingTasks();
+   NettyMessage.CancelPartitionRequest cancelPartitionRequest = embeddedChannel.readOutbound();
+   assertNotNull(cancelPartitionRequest);
+   assertEquals(inputChannel.getInputChannelId(), cancelPartitionRequest.receiverId);
+   } finally {
+   releaseResource(inputGate, networkBufferPool);
 
 Review comment:
   also close `embeddedChannel`?




[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-11 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r390795221
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/CreditBasedPartitionRequestClientHandlerTest.java
 ##
 @@ -419,6 +458,86 @@ public void testNotifyCreditAvailableAfterReleased() 
throws Exception {
}
}
 
+   @Test
+   public void testReadBufferResponseBeforeReleasingChannel() throws 
Exception {
+   testReadBufferResponseWithReleasingOrRemovingChannel(false, 
true);
+   }
+
+   @Test
+   public void testReadBufferResponseBeforeRemovingChannel() throws 
Exception {
+   testReadBufferResponseWithReleasingOrRemovingChannel(true, 
true);
+   }
+
+   @Test
+   public void testReadBufferResponseAfterReleasingChannel() throws 
Exception {
+   testReadBufferResponseWithReleasingOrRemovingChannel(false, 
false);
+   }
+
+   @Test
+   public void testReadBufferResponseAfterRemovingChannel() throws 
Exception {
+   testReadBufferResponseWithReleasingOrRemovingChannel(true, 
false);
+   }
+
+   private void testReadBufferResponseWithReleasingOrRemovingChannel(
+   boolean isRemoved,
+   boolean readBeforeReleasingOrRemoving) throws Exception {
+
+   int bufferSize = 1024;
+
+   NetworkBufferPool networkBufferPool = new NetworkBufferPool(10, 
bufferSize, 2);
+   SingleInputGate inputGate = createSingleInputGate(1);
+   RemoteInputChannel inputChannel = new InputChannelBuilder()
+   .setMemorySegmentProvider(networkBufferPool)
+   .buildRemoteAndSetToGate(inputGate);
+   inputGate.assignExclusiveSegments();
+
+   CreditBasedPartitionRequestClientHandler handler = new 
CreditBasedPartitionRequestClientHandler();
+   EmbeddedChannel embeddedChannel = new EmbeddedChannel(handler);
+   handler.addInputChannel(inputChannel);
+
+   try {
+   if (readBeforeReleasingOrRemoving) {
 
 Review comment:
  It actually reads after releasing the channel, so the condition is inverted: `readBeforeReleasingOrRemoving` -> `!readBeforeReleasingOrRemoving`.




[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-11 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r390787742
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/NettyMessageDecoder.java
 ##
 @@ -0,0 +1,92 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import org.apache.flink.shaded.netty4.io.netty.channel.ChannelHandlerContext;
+
+import javax.annotation.Nullable;
+
+/**
+ * Base class of decoders for specified netty messages.
+ */
+abstract class NettyMessageDecoder implements AutoCloseable {
+
+   /** ID of the message under decoding. */
+   protected int msgId;
+
+   /** Length of the message under decoding. */
+   protected int messageLength;
+
+   /**
+* The result of decoding one netty buffer.
 
 Review comment:
  nit: buffer -> {@link ByteBuf}




[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-11 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r390785778
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/ByteBufUtils.java
 ##
 @@ -0,0 +1,64 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+
+import javax.annotation.Nullable;
+
+/**
+ * Utility routines to process Netty ByteBuf.
 
 Review comment:
   nit: {@link ByteBuf}
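For context, a central job of the `ByteBufUtils` class under review is accumulating a fixed-length message out of possibly fragmented network buffers. The sketch below shows that accumulation idea against `java.nio.ByteBuffer` so it runs without Netty on the classpath; the method name and signature are illustrative, and the real utility operates on Netty's `ByteBuf`:

```java
import java.nio.ByteBuffer;

public class AccumulateSketch {

    /**
     * Copies as many bytes as possible (at most expected - target.position())
     * from source into target. Returns true once target holds all expected bytes.
     */
    static boolean accumulate(ByteBuffer source, ByteBuffer target, int expected) {
        int missing = expected - target.position();
        int toCopy = Math.min(missing, source.remaining());
        for (int i = 0; i < toCopy; i++) {
            target.put(source.get());
        }
        return target.position() == expected;
    }

    public static void main(String[] args) {
        // An 8-byte message arrives split across two network reads.
        ByteBuffer target = ByteBuffer.allocate(8);
        ByteBuffer part1 = ByteBuffer.wrap(new byte[]{1, 2, 3});
        ByteBuffer part2 = ByteBuffer.wrap(new byte[]{4, 5, 6, 7, 8});
        System.out.println(accumulate(part1, target, 8)); // incomplete after 3 of 8 bytes
        System.out.println(accumulate(part2, target, 8)); // complete after the second read
    }
}
```

The same pattern, on `ByteBuf`, is what lets the decoder hand Netty a Flink buffer to fill directly instead of copying through an intermediate Netty buffer.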




[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-11 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r390781744
 
 

 ##
 File path: 
flink-end-to-end-tests/flink-netty-shuffle-memory-control-test/src/main/java/org/apache/flink/streaming/tests/NettyShuffleMemoryControlTestProgram.java
 ##
 @@ -0,0 +1,140 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.tests;
+
+import org.apache.flink.api.java.utils.ParameterTool;
+import org.apache.flink.configuration.ConfigOption;
+import org.apache.flink.configuration.ConfigOptions;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
+import 
org.apache.flink.streaming.api.functions.source.RichParallelSourceFunction;
+
+import 
org.apache.flink.shaded.netty4.io.netty.util.internal.OutOfDirectMemoryError;
+
+import sun.misc.Unsafe;
+
+import static org.apache.flink.util.Preconditions.checkArgument;
+
+/**
+ * Test program to verify the direct memory consumption of Netty. Without 
zero-copy Netty
+ * may create more than one chunk, thus we may encounter {@link 
OutOfDirectMemoryError} if
+ * we limit the total direct memory to be less than two chunks. Instead, with 
zero-copy
+ * introduced in (https://issues.apache.org/jira/browse/FLINK-10742) one chunk 
will be
+ * enough and the exception will not occur.
+ *
+ * Since Netty uses low level API of {@link Unsafe} to allocate direct 
buffer when using
+ * JDK8 and these memory will not be counted in direct memory, the test is 
only effective
+ * when JDK11 is used.
+ */
+public class NettyShuffleMemoryControlTestProgram {
+   private static final int RECORD_LENGTH = 2048;
+
+   private static final ConfigOption<Integer> RUNNING_TIME_IN_SECONDS = ConfigOptions
+   .key("test.running_time_in_seconds")
+   .defaultValue(120)
+   .withDescription("The time to run.");
+
+   private static final ConfigOption<Integer> MAP_PARALLELISM = ConfigOptions
+   .key("test.map_parallelism")
+   .defaultValue(1)
+   .withDescription("The number of map tasks.");
+
+   private static final ConfigOption<Integer> REDUCE_PARALLELISM = ConfigOptions
+   .key("test.reduce_parallelism")
+   .defaultValue(1)
+   .withDescription("The number of reduce tasks.");
+
+   public static void main(String[] args) throws Exception {
+   // parse the parameters
+   final ParameterTool params = ParameterTool.fromArgs(args);
+
+   final int runningTimeInSeconds = 
params.getInt(RUNNING_TIME_IN_SECONDS.key(), 
RUNNING_TIME_IN_SECONDS.defaultValue());
+   final int mapParallelism = params.getInt(MAP_PARALLELISM.key(), 
MAP_PARALLELISM.defaultValue());
+   final int reduceParallelism = 
params.getInt(REDUCE_PARALLELISM.key(), REDUCE_PARALLELISM.defaultValue());
+
+   checkArgument(runningTimeInSeconds > 0,
+   "The running time in seconds should be positive, but it 
is {}",
+   runningTimeInSeconds);
+   checkArgument(mapParallelism > 0,
+   "The number of map tasks should be positive, but it is 
{}",
+   mapParallelism);
+   checkArgument(reduceParallelism > 0,
+   "The number of reduce tasks should be positive, but it 
is {}",
+   reduceParallelism);
+
+   StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();
+   env.addSource(new StringSourceFunction(runningTimeInSeconds))
+   .setParallelism(mapParallelism)
+   .slotSharingGroup("a")
+   .shuffle()
+   .addSink(new DummySink())
+   .setParallelism(reduceParallelism)
+   .slotSharingGroup("b");
+
+   // execute program
+   env.execute("TaskMana
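The direct-memory reasoning in the Javadoc above can be made concrete with Netty's default arena arithmetic: a pooled arena chunk is `pageSize << maxOrder` bytes, which with the defaults (8 KiB pages, maxOrder 11) is 16 MiB. The cap value below is illustrative only; the actual limit used by the end-to-end test depends on its configuration:

```java
// Back-of-envelope arithmetic behind the test's direct-memory limit,
// using Netty's default pooled-allocator parameters.
public class NettyChunkMath {

    // A Netty arena chunk spans pageSize << maxOrder bytes.
    static long chunkBytes(int pageSizeBytes, int maxOrder) {
        return (long) pageSizeBytes << maxOrder;
    }

    public static void main(String[] args) {
        long chunk = chunkBytes(8192, 11);
        System.out.println("chunk = " + (chunk >> 20) + " MiB");

        // Without zero-copy, Netty may need a second chunk for the copies it
        // makes; with zero-copy one chunk suffices. So a direct-memory cap
        // strictly between one and two chunks separates the two behaviors.
        long cap = chunk + chunk / 2;
        System.out.println("cap between 1 and 2 chunks = " + (cap >> 20) + " MiB");
    }
}
```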

[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-11 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r390779327
 
 

 ##
 File path: 
flink-end-to-end-tests/flink-netty-shuffle-memory-control-test/pom.xml
 ##
 @@ -0,0 +1,71 @@
+<?xml version="1.0" encoding="UTF-8"?>
+
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
+
+	<modelVersion>4.0.0</modelVersion>
+
+	<parent>
+		<groupId>org.apache.flink</groupId>
+		<artifactId>flink-end-to-end-tests</artifactId>
+		<version>1.11-SNAPSHOT</version>
+		<relativePath>..</relativePath>
+	</parent>
+
+	<artifactId>flink-netty-shuffle-memory-control-test</artifactId>
+	<name>flink-netty-shuffle-memory-control-test</name>
+	<packaging>jar</packaging>
+
+	<dependencies>
+		<dependency>
+			<groupId>org.apache.flink</groupId>
+			<artifactId>flink-streaming-java_${scala.binary.version}</artifactId>
+			<version>${project.version}</version>
 
 Review comment:
  Double-check whether this dependency's scope should really be `provided`; if not, remove it.




[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-11 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r390778751
 
 

 ##
 File path: 
flink-end-to-end-tests/flink-netty-shuffle-memory-control-test/pom.xml
 ##
 @@ -0,0 +1,71 @@
+<?xml version="1.0" encoding="UTF-8"?>
+
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
+
+	<modelVersion>4.0.0</modelVersion>
+
+	<parent>
+		<groupId>org.apache.flink</groupId>
+		<artifactId>flink-end-to-end-tests</artifactId>
+		<version>1.11-SNAPSHOT</version>
+		<relativePath>..</relativePath>
+	</parent>
+
+	<artifactId>flink-netty-shuffle-memory-control-test</artifactId>
+	<name>flink-netty-shuffle-memory-control-test</name>
+	<packaging>jar</packaging>
+
+	<dependencies>
+		<dependency>
+			<groupId>org.apache.flink</groupId>
+			<artifactId>flink-streaming-java_${scala.binary.version}</artifactId>
+			<version>${project.version}</version>
+		</dependency>
+	</dependencies>
+
+	<build>
+		<plugins>
+			<plugin>
+				<groupId>org.apache.maven.plugins</groupId>
+				<artifactId>maven-shade-plugin</artifactId>
+				<executions>
+					<execution>
+						<id>TaskManagerDirectMemoryTestProgram</id>
 
 Review comment:
  `TaskManagerDirectMemoryTestProgram` -> `NettyShuffleMemoryControlTestProgram`




[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-09 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r389502766
 
 

 ##
 File path: 
flink-end-to-end-tests/test-scripts/test_taskmanager_direct_memory.sh
 ##
 @@ -0,0 +1,47 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+source "$(dirname "$0")"/common.sh
+
+TEST=flink-taskmanager-direct-memory-test
+TEST_PROGRAM_NAME=TaskManagerDirectMemoryTestProgram
+TEST_PROGRAM_JAR=${END_TO_END_DIR}/$TEST/target/$TEST_PROGRAM_NAME.jar
+
+set_config_key "akka.ask.timeout" "60 s"
+set_config_key "web.timeout" "6"
+
+set_config_key "taskmanager.memory.process.size" "1536m"
+
+set_config_key "taskmanager.memory.managed.size" "8" # 8Mb
+set_config_key "taskmanager.memory.network.min" "256mb"
+set_config_key "taskmanager.memory.network.max" "256mb"
+set_config_key "taskmanager.memory.jvm-metaspace.size" "64m"
+
+set_config_key "taskmanager.numberOfTaskSlots" "20" # 20 slots per TM
+set_config_key "taskmanager.network.netty.num-arenas" "1" # Use only one arena 
for each TM
 
 Review comment:
  Give some more explanation for this setting, e.g. `Set only one arena per TM to bound the Netty internal memory overhead`.




[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-09 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r389502175
 
 

 ##
 File path: 
flink-end-to-end-tests/test-scripts/test_taskmanager_direct_memory.sh
 ##
 @@ -0,0 +1,47 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+source "$(dirname "$0")"/common.sh
+
+TEST=flink-taskmanager-direct-memory-test
+TEST_PROGRAM_NAME=TaskManagerDirectMemoryTestProgram
+TEST_PROGRAM_JAR=${END_TO_END_DIR}/$TEST/target/$TEST_PROGRAM_NAME.jar
+
+set_config_key "akka.ask.timeout" "60 s"
+set_config_key "web.timeout" "6"
+
+set_config_key "taskmanager.memory.process.size" "1536m"
 
 Review comment:
  The network part below only takes 512m, so we might not need such a large process memory size.




[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-09 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r389501872
 
 

 ##
 File path: 
flink-end-to-end-tests/test-scripts/test_taskmanager_direct_memory.sh
 ##
 @@ -0,0 +1,47 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+source "$(dirname "$0")"/common.sh
+
+TEST=flink-taskmanager-direct-memory-test
+TEST_PROGRAM_NAME=TaskManagerDirectMemoryTestProgram
+TEST_PROGRAM_JAR=${END_TO_END_DIR}/$TEST/target/$TEST_PROGRAM_NAME.jar
+
+set_config_key "akka.ask.timeout" "60 s"
+set_config_key "web.timeout" "6"
+
+set_config_key "taskmanager.memory.process.size" "1536m"
+
+set_config_key "taskmanager.memory.managed.size" "8" # 8Mb
+set_config_key "taskmanager.memory.network.min" "256mb"
+set_config_key "taskmanager.memory.network.max" "256mb"
+set_config_key "taskmanager.memory.jvm-metaspace.size" "64m"
 
 Review comment:
  Why does this value need to be adjusted?




[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-09 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r389498145
 
 

 ##
 File path: 
flink-end-to-end-tests/test-scripts/test_taskmanager_direct_memory.sh
 ##
 @@ -0,0 +1,47 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+source "$(dirname "$0")"/common.sh
+
+TEST=flink-taskmanager-direct-memory-test
+TEST_PROGRAM_NAME=TaskManagerDirectMemoryTestProgram
+TEST_PROGRAM_JAR=${END_TO_END_DIR}/$TEST/target/$TEST_PROGRAM_NAME.jar
+
+set_config_key "akka.ask.timeout" "60 s"
+set_config_key "web.timeout" "6"
 
 Review comment:
  Do we need to change these default values? I guess the program has no special requirements on these parameters.




[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-09 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r389496235
 
 

 ##
 File path: 
flink-end-to-end-tests/flink-taskmanager-direct-memory-test/src/main/java/org/apache/flink/streaming/tests/TaskManagerDirectMemoryTestProgram.java
 ##
 @@ -0,0 +1,138 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.tests;
+
+import org.apache.flink.api.java.utils.ParameterTool;
+import org.apache.flink.configuration.ConfigOption;
+import org.apache.flink.configuration.ConfigOptions;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
+import 
org.apache.flink.streaming.api.functions.source.RichParallelSourceFunction;
+
+import static org.apache.flink.util.Preconditions.checkArgument;
+
+/**
+ * Test program for taskmanager direct memory consumption.
+ */
+public class TaskManagerDirectMemoryTestProgram {
+   private static final ConfigOption<Integer> RUNNING_TIME_IN_SECONDS = ConfigOptions
+   .key("test.running_time_in_seconds")
+   .defaultValue(120)
+   .withDescription("The time to run.");
+
+   private static final ConfigOption<Integer> RECORD_LENGTH = ConfigOptions
+   .key("test.record_length")
+   .defaultValue(2048)
+   .withDescription("The length of record.");
+
+   private static final ConfigOption<Integer> MAP_PARALLELISM = ConfigOptions
+   .key("test.map_parallelism")
+   .defaultValue(1)
+   .withDescription("The number of map tasks.");
+
+   private static final ConfigOption<Integer> REDUCE_PARALLELISM = ConfigOptions
+   .key("test.reduce_parallelism")
+   .defaultValue(1)
+   .withDescription("The number of reduce tasks.");
+
+   public static void main(String[] args) throws Exception {
+   // parse the parameters
+   final ParameterTool params = ParameterTool.fromArgs(args);
+
+   final int runningTimeInSeconds = 
params.getInt(RUNNING_TIME_IN_SECONDS.key(), 
RUNNING_TIME_IN_SECONDS.defaultValue());
+   final int recordLength = params.getInt(RECORD_LENGTH.key(), 
RECORD_LENGTH.defaultValue());
+   final int mapParallelism = params.getInt(MAP_PARALLELISM.key(), 
MAP_PARALLELISM.defaultValue());
+   final int reduceParallelism = 
params.getInt(REDUCE_PARALLELISM.key(), REDUCE_PARALLELISM.defaultValue());
+
+   checkArgument(runningTimeInSeconds > 0,
+   "The running time in seconds should be positive, but it 
is {}",
+   recordLength);
+   checkArgument(recordLength > 0,
+   "The record length should be positive, but it is {}",
+   recordLength);
+   checkArgument(mapParallelism > 0,
+   "The number of map tasks should be positive, but it is 
{}",
+   mapParallelism);
+   checkArgument(reduceParallelism > 0,
+   "The number of reduce tasks should be positive, but it 
is {}",
+   reduceParallelism);
+
+   byte[] bytes = new byte[recordLength];
+   for (int i = 0; i < recordLength; ++i) {
+   bytes[i] = 'a';
+   }
+   String str = new String(bytes);
 
 Review comment:
  We can define a final `String` inside `StringSourceFunction`; there is no need to pass it in from outside.
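A possible shape of the suggested change, with the payload built once as a constant of the source function rather than threaded through a constructor. The class and constant names below follow the quoted diff where they exist (`StringSourceFunction`, `RECORD_LENGTH`) but the surrounding Flink types are omitted, so this is a sketch, not the PR's actual code:

```java
import java.util.Arrays;

public class StringSourceSketch {

    static final int RECORD_LENGTH = 2048;

    // The 2048-char payload is built once at class-initialization time;
    // every emitted record reuses the same immutable String.
    static final String PAYLOAD = buildPayload(RECORD_LENGTH);

    static String buildPayload(int length) {
        char[] chars = new char[length];
        Arrays.fill(chars, 'a');
        return new String(chars);
    }

    public static void main(String[] args) {
        System.out.println(PAYLOAD.length());
    }
}
```

Besides removing a constructor parameter, this keeps the payload out of the serialized function state, since the constant is rebuilt in each task's JVM.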




[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-09 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r389495964
 
 

 ##
 File path: 
flink-end-to-end-tests/flink-taskmanager-direct-memory-test/src/main/java/org/apache/flink/streaming/tests/TaskManagerDirectMemoryTestProgram.java
 ##
 @@ -0,0 +1,138 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.tests;
+
+import org.apache.flink.api.java.utils.ParameterTool;
+import org.apache.flink.configuration.ConfigOption;
+import org.apache.flink.configuration.ConfigOptions;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
+import org.apache.flink.streaming.api.functions.source.RichParallelSourceFunction;
+
+import static org.apache.flink.util.Preconditions.checkArgument;
+
+/**
+ * Test program for taskmanager direct memory consumption.
+ */
+public class TaskManagerDirectMemoryTestProgram {
+   private static final ConfigOption<Integer> RUNNING_TIME_IN_SECONDS = ConfigOptions
+   .key("test.running_time_in_seconds")
+   .defaultValue(120)
+   .withDescription("The time to run.");
+
+   private static final ConfigOption<Integer> RECORD_LENGTH = ConfigOptions
+   .key("test.record_length")
+   .defaultValue(2048)
+   .withDescription("The length of record.");
+
+   private static final ConfigOption<Integer> MAP_PARALLELISM = ConfigOptions
+   .key("test.map_parallelism")
+   .defaultValue(1)
+   .withDescription("The number of map tasks.");
+
+   private static final ConfigOption<Integer> REDUCE_PARALLELISM = ConfigOptions
+   .key("test.reduce_parallelism")
+   .defaultValue(1)
+   .withDescription("The number of reduce tasks.");
+
+   public static void main(String[] args) throws Exception {
+   // parse the parameters
+   final ParameterTool params = ParameterTool.fromArgs(args);
+
+   final int runningTimeInSeconds = params.getInt(RUNNING_TIME_IN_SECONDS.key(), RUNNING_TIME_IN_SECONDS.defaultValue());
+   final int recordLength = params.getInt(RECORD_LENGTH.key(), RECORD_LENGTH.defaultValue());
+   final int mapParallelism = params.getInt(MAP_PARALLELISM.key(), MAP_PARALLELISM.defaultValue());
+   final int reduceParallelism = params.getInt(REDUCE_PARALLELISM.key(), REDUCE_PARALLELISM.defaultValue());
 
 Review comment:
  It might not make sense to let these parameters be adjusted to arbitrary values; otherwise we cannot reliably mock the specific scenario that exceeds the Netty memory overhead. It would be better to make them static in the program.
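A minimal sketch of what "static in the program" could look like. The constant names mirror the quoted options, but the sketch itself (a bare class with compile-time constants instead of `ParameterTool` parsing) is an assumption, not code from the PR.

```java
// Hypothetical sketch: fix the test parameters as constants so the mocked
// scenario that exceeds the Netty memory overhead cannot be changed from
// the command line.
public class StaticParametersSketch {

    static final int RUNNING_TIME_IN_SECONDS = 120;
    static final int RECORD_LENGTH = 2048;
    static final int MAP_PARALLELISM = 1;
    static final int REDUCE_PARALLELISM = 1;

    public static void main(String[] args) {
        // no ParameterTool parsing: the scenario is always the same
        System.out.println(RECORD_LENGTH); // prints 2048
    }
}
```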




[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-09 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r389491654
 
 

 ##
 File path: 
flink-end-to-end-tests/flink-taskmanager-direct-memory-test/src/main/java/org/apache/flink/streaming/tests/TaskManagerDirectMemoryTestProgram.java
 ##
 @@ -0,0 +1,138 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.tests;
+
+import org.apache.flink.api.java.utils.ParameterTool;
+import org.apache.flink.configuration.ConfigOption;
+import org.apache.flink.configuration.ConfigOptions;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
+import org.apache.flink.streaming.api.functions.source.RichParallelSourceFunction;
+
+import static org.apache.flink.util.Preconditions.checkArgument;
+
+/**
+ * Test program for taskmanager direct memory consumption.
 
 Review comment:
  Give a more specific description stating that this mainly exercises the Netty memory overhead during network data shuffle. Also add the respective JIRA link to establish the relationship.
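One possible wording of such a Javadoc is sketched below. The exact text and the class name are assumptions (the rename to `NettyShuffleMemoryControlTestProgram` is only suggested elsewhere in this review); FLINK-10742 is the issue named in the PR title.

```java
/**
 * Hypothetical Javadoc sketch: a test program that verifies the Netty
 * memory overhead on the TaskManager stays bounded during network data
 * shuffle.
 *
 * <p>See FLINK-10742 for the background of this test.
 */
public class NettyShuffleMemoryControlTestProgram {

    public static void main(String[] args) {
        // placeholder body; the real program would build the shuffle topology
        System.out.println("see FLINK-10742");
    }
}
```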




[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-09 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r389491807
 
 

 ##
 File path: flink-end-to-end-tests/pom.xml
 ##
 @@ -88,6 +88,7 @@ under the License.
 	<module>flink-elasticsearch7-test</module>
 	<module>flink-end-to-end-tests-common-kafka</module>
 	<module>flink-tpcds-test</module>
+	<module>flink-taskmanager-direct-memory-test</module>
 
 Review comment:
   flink-netty-shuffle-memory-control-test




[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-09 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r389491436
 
 

 ##
 File path: 
flink-end-to-end-tests/flink-taskmanager-direct-memory-test/src/main/java/org/apache/flink/streaming/tests/TaskManagerDirectMemoryTestProgram.java
 ##
 @@ -0,0 +1,138 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.tests;
+
+import org.apache.flink.api.java.utils.ParameterTool;
+import org.apache.flink.configuration.ConfigOption;
+import org.apache.flink.configuration.ConfigOptions;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
+import org.apache.flink.streaming.api.functions.source.RichParallelSourceFunction;
+
+import static org.apache.flink.util.Preconditions.checkArgument;
+
+/**
+ * Test program for taskmanager direct memory consumption.
+ */
+public class TaskManagerDirectMemoryTestProgram {
 
 Review comment:
  `TaskManagerDirectMemory` has too wide a scope and does not indicate the precise motivation. Use `NettyShuffleMemoryControlTestProgram` instead.




[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-08 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r389463393
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyMessageClientSideSerializationTest.java
 ##
 @@ -0,0 +1,221 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.core.memory.MemorySegment;
+import org.apache.flink.core.memory.MemorySegmentFactory;
+import org.apache.flink.runtime.io.network.TestingPartitionRequestClient;
+import org.apache.flink.runtime.io.network.buffer.Buffer;
+import org.apache.flink.runtime.io.network.buffer.BufferCompressor;
+import org.apache.flink.runtime.io.network.buffer.BufferDecompressor;
+import org.apache.flink.runtime.io.network.buffer.FreeingBufferRecycler;
+import org.apache.flink.runtime.io.network.buffer.NetworkBuffer;
+import org.apache.flink.runtime.io.network.buffer.NetworkBufferPool;
+import org.apache.flink.runtime.io.network.partition.consumer.InputChannelID;
+
+import org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannel;
+import org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate;
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import org.apache.flink.shaded.netty4.io.netty.channel.embedded.EmbeddedChannel;
+
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Random;
+
+import static org.apache.flink.runtime.io.network.netty.NettyTestUtil.encodeAndDecode;
+import static org.apache.flink.runtime.io.network.netty.NettyTestUtil.verifyBufferResponseHeader;
+import static org.apache.flink.runtime.io.network.netty.NettyTestUtil.verifyErrorResponse;
+import static org.apache.flink.runtime.io.network.partition.InputChannelTestUtils.createRemoteInputChannel;
+import static org.apache.flink.runtime.io.network.partition.InputChannelTestUtils.createSingleInputGate;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+
+/**
+ * Tests for the serialization and deserialization of the various {@link NettyMessage} sub-classes
+ * sent from server side to client side.
+ */
+@RunWith(Parameterized.class)
+public class NettyMessageClientSideSerializationTest {
+
+   private static final int BUFFER_SIZE = 1024;
+
+   private static final BufferCompressor COMPRESSOR = new BufferCompressor(BUFFER_SIZE, "LZ4");
+
+   private static final BufferDecompressor DECOMPRESSOR = new BufferDecompressor(BUFFER_SIZE, "LZ4");
+
+   private final Random random = new Random();
+
+   private EmbeddedChannel channel;
+
+   private NetworkBufferPool networkBufferPool;
+
+   private SingleInputGate inputGate;
+
+   private InputChannelID inputChannelId;
+
+   // ---------------------------------------------------------------------
+   //  parameters
+   // ---------------------------------------------------------------------
+
+   private final boolean testReadOnlyBuffer;
+
+   private final boolean testCompressedBuffer;
+
+   @Parameterized.Parameters(name = "testReadOnlyBuffer = {0}, testCompressedBuffer = {1}")
+   public static Collection<Object[]> testReadOnlyBuffer() {
+   return Arrays.asList(new Object[][] {
+   {false, false},
+   {true, false},
+   {false, true},
+   {true, true}
+   });
+   }
+
+   public NettyMessageClientSideSerializationTest(boolean testReadOnlyBuffer, boolean testCompressedBuffer) {
+   this.testReadOnlyBuffer = testReadOnlyBuffer;
+   this.testCompressedBuffer = testCompressedBuffer;
+   }
+
+   @Before
+   public void setup() throws IOException, InterruptedException {
+   networkBufferPool = new NetworkB

[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-08 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r389460599
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyTestUtil.java
 ##
 @@ -162,6 +168,37 @@ static NettyConfig createConfig(int segmentSize, Configuration config) throws Ex
 config);
 }
 
+   // ---------------------------------------------------------------------
+   // Encoding & Decoding
+   // ---------------------------------------------------------------------
+
+   @SuppressWarnings("unchecked")
 
 Review comment:
  This can also be removed together with the change suggested in https://github.com/apache/flink/pull/7368/files#r389460526




[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-08 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r389460526
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyTestUtil.java
 ##
 @@ -162,6 +168,37 @@ static NettyConfig createConfig(int segmentSize, Configuration config) throws Ex
 config);
 }
 
+   // ---------------------------------------------------------------------
+   // Encoding & Decoding
+   // ---------------------------------------------------------------------
+
+   @SuppressWarnings("unchecked")
+   static <T extends NettyMessage> T encodeAndDecode(T msg, EmbeddedChannel channel) {
+   channel.writeOutbound(msg);
+   ByteBuf encoded = channel.readOutbound();
+
+   assertTrue(channel.writeInbound(encoded));
+
+   return (T) channel.readInbound();
 
 Review comment:
  No need for this `(T)` cast.
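The point can be shown with a self-contained sketch. Netty is not used here: `FakeChannel` is a stand-in that mimics `EmbeddedChannel`'s generically typed `readInbound()`. When the read method itself returns `T`, the explicit `(T)` cast at the call site is redundant.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Stand-in for EmbeddedChannel, showing that an explicit (T) cast is
// redundant when readInbound() already has a generic return type.
public class EncodeDecodeSketch {

    static class FakeChannel {
        private final Deque<Object> messages = new ArrayDeque<>();

        boolean writeInbound(Object msg) {
            return messages.add(msg);
        }

        @SuppressWarnings("unchecked")
        <T> T readInbound() {
            return (T) messages.poll();
        }
    }

    // no (T) cast needed here: T is inferred from readInbound()
    static <T> T encodeAndDecode(T msg, FakeChannel channel) {
        channel.writeInbound(msg);
        return channel.readInbound();
    }

    public static void main(String[] args) {
        String roundTripped = encodeAndDecode("hello", new FakeChannel());
        System.out.println(roundTripped); // prints "hello"
    }
}
```

The same reasoning removes the call-site `@SuppressWarnings` as well; only the single unchecked cast inside the channel stand-in remains.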




[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-08 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r389459880
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyMessageServerSideSerializationTest.java
 ##
 @@ -0,0 +1,106 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.runtime.event.task.IntegerTaskEvent;
+import org.apache.flink.runtime.io.network.partition.ResultPartitionID;
+import org.apache.flink.runtime.io.network.partition.consumer.InputChannelID;
+import 
org.apache.flink.shaded.netty4.io.netty.channel.embedded.EmbeddedChannel;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.util.Random;
+
+import static 
org.apache.flink.runtime.io.network.netty.NettyTestUtil.encodeAndDecode;
+import static org.junit.Assert.assertEquals;
+
+/**
+ * Tests for the serialization and deserialization of the various {@link 
NettyMessage} sub-classes
+ * sent from client side to server side.
+ */
+public class NettyMessageServerSideSerializationTest {
+
+   private final Random random = new Random();
+
+   private EmbeddedChannel channel;
+
+   @Before
+   public void setup() {
+   channel = new EmbeddedChannel(
+   new NettyMessage.NettyMessageEncoder(), // For outbound 
messages
+   new NettyMessage.NettyMessageDecoder()); // For inbound 
messages
+   }
+
+   @After
+   public void tearDown() {
+   channel.close();
+   }
+
+   @Test
+   public void testPartitionRequest() {
+   NettyMessage.PartitionRequest expected = new 
NettyMessage.PartitionRequest(
+   new ResultPartitionID(),
+   random.nextInt(),
+   new InputChannelID(),
+   random.nextInt());
+
+   NettyMessage.PartitionRequest actual = 
encodeAndDecode(expected, channel);
+
+   assertEquals(expected.partitionId, actual.partitionId);
+   assertEquals(expected.queueIndex, actual.queueIndex);
+   assertEquals(expected.receiverId, actual.receiverId);
+   assertEquals(expected.credit, actual.credit);
+   }
+
+   @Test
+   public void testTaskEventRequest() {
+   NettyMessage.TaskEventRequest expected = new 
NettyMessage.TaskEventRequest(new IntegerTaskEvent(random.nextInt()), new 
ResultPartitionID(), new InputChannelID());
 
 Review comment:
  Long line; split it so each argument is on its own line.




[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-08 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r389459934
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyMessageServerSideSerializationTest.java
 ##
 @@ -0,0 +1,106 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.runtime.event.task.IntegerTaskEvent;
+import org.apache.flink.runtime.io.network.partition.ResultPartitionID;
+import org.apache.flink.runtime.io.network.partition.consumer.InputChannelID;
+import 
org.apache.flink.shaded.netty4.io.netty.channel.embedded.EmbeddedChannel;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.util.Random;
+
+import static 
org.apache.flink.runtime.io.network.netty.NettyTestUtil.encodeAndDecode;
+import static org.junit.Assert.assertEquals;
+
+/**
+ * Tests for the serialization and deserialization of the various {@link 
NettyMessage} sub-classes
+ * sent from client side to server side.
+ */
+public class NettyMessageServerSideSerializationTest {
+
+   private final Random random = new Random();
+
+   private EmbeddedChannel channel;
+
+   @Before
+   public void setup() {
+   channel = new EmbeddedChannel(
+   new NettyMessage.NettyMessageEncoder(), // For outbound 
messages
+   new NettyMessage.NettyMessageDecoder()); // For inbound 
messages
+   }
+
+   @After
+   public void tearDown() {
+   channel.close();
+   }
+
+   @Test
+   public void testPartitionRequest() {
+   NettyMessage.PartitionRequest expected = new 
NettyMessage.PartitionRequest(
+   new ResultPartitionID(),
+   random.nextInt(),
+   new InputChannelID(),
+   random.nextInt());
+
+   NettyMessage.PartitionRequest actual = 
encodeAndDecode(expected, channel);
+
+   assertEquals(expected.partitionId, actual.partitionId);
+   assertEquals(expected.queueIndex, actual.queueIndex);
+   assertEquals(expected.receiverId, actual.receiverId);
+   assertEquals(expected.credit, actual.credit);
+   }
+
+   @Test
+   public void testTaskEventRequest() {
+   NettyMessage.TaskEventRequest expected = new 
NettyMessage.TaskEventRequest(new IntegerTaskEvent(random.nextInt()), new 
ResultPartitionID(), new InputChannelID());
+   NettyMessage.TaskEventRequest actual = 
encodeAndDecode(expected, channel);
+
+   assertEquals(expected.event, actual.event);
+   assertEquals(expected.partitionId, actual.partitionId);
+   assertEquals(expected.receiverId, actual.receiverId);
+   }
+
+   @Test
+   public void testCancelPartitionRequest() {
+   NettyMessage.CancelPartitionRequest expected = new 
NettyMessage.CancelPartitionRequest(new InputChannelID());
+   NettyMessage.CancelPartitionRequest actual = 
encodeAndDecode(expected, channel);
+
+   assertEquals(expected.receiverId, actual.receiverId);
+   }
+
+   @Test
+   public void testCloseRequest() {
+   NettyMessage.CloseRequest expected = new 
NettyMessage.CloseRequest();
+   NettyMessage.CloseRequest actual = encodeAndDecode(expected, 
channel);
+
+   assertEquals(expected.getClass(), actual.getClass());
+   }
+
+   @Test
+   public void testAddCredit() {
+   NettyMessage.AddCredit expected = new 
NettyMessage.AddCredit(random.nextInt(Integer.MAX_VALUE) + 1, new 
InputChannelID());
 
 Review comment:
   ditto
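A side note on the `AddCredit` construction quoted above: `random.nextInt(Integer.MAX_VALUE) + 1` yields a value in [1, Integer.MAX_VALUE] with no risk of overflow, which keeps the credit strictly positive. A quick self-contained check:

```java
import java.util.Random;

public class PositiveCreditSketch {
    public static void main(String[] args) {
        Random random = new Random(42);
        for (int i = 0; i < 1_000_000; i++) {
            // nextInt(bound) returns [0, bound), so adding 1 yields
            // [1, Integer.MAX_VALUE] — the +1 cannot overflow because the
            // largest possible value before it is Integer.MAX_VALUE - 1.
            int credit = random.nextInt(Integer.MAX_VALUE) + 1;
            if (credit <= 0) {
                throw new AssertionError("credit must be strictly positive");
            }
        }
        System.out.println("ok"); // prints ok
    }
}
```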



[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-08 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r389459908
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyMessageServerSideSerializationTest.java
 ##
 @@ -0,0 +1,106 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.runtime.event.task.IntegerTaskEvent;
+import org.apache.flink.runtime.io.network.partition.ResultPartitionID;
+import org.apache.flink.runtime.io.network.partition.consumer.InputChannelID;
+import 
org.apache.flink.shaded.netty4.io.netty.channel.embedded.EmbeddedChannel;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.util.Random;
+
+import static 
org.apache.flink.runtime.io.network.netty.NettyTestUtil.encodeAndDecode;
+import static org.junit.Assert.assertEquals;
+
+/**
+ * Tests for the serialization and deserialization of the various {@link 
NettyMessage} sub-classes
+ * sent from client side to server side.
+ */
+public class NettyMessageServerSideSerializationTest {
+
+   private final Random random = new Random();
+
+   private EmbeddedChannel channel;
+
+   @Before
+   public void setup() {
+   channel = new EmbeddedChannel(
+   new NettyMessage.NettyMessageEncoder(), // For outbound 
messages
+   new NettyMessage.NettyMessageDecoder()); // For inbound 
messages
+   }
+
+   @After
+   public void tearDown() {
+   channel.close();
+   }
+
+   @Test
+   public void testPartitionRequest() {
+   NettyMessage.PartitionRequest expected = new 
NettyMessage.PartitionRequest(
+   new ResultPartitionID(),
+   random.nextInt(),
+   new InputChannelID(),
+   random.nextInt());
+
+   NettyMessage.PartitionRequest actual = 
encodeAndDecode(expected, channel);
+
+   assertEquals(expected.partitionId, actual.partitionId);
+   assertEquals(expected.queueIndex, actual.queueIndex);
+   assertEquals(expected.receiverId, actual.receiverId);
+   assertEquals(expected.credit, actual.credit);
+   }
+
+   @Test
+   public void testTaskEventRequest() {
+   NettyMessage.TaskEventRequest expected = new 
NettyMessage.TaskEventRequest(new IntegerTaskEvent(random.nextInt()), new 
ResultPartitionID(), new InputChannelID());
+   NettyMessage.TaskEventRequest actual = 
encodeAndDecode(expected, channel);
+
+   assertEquals(expected.event, actual.event);
+   assertEquals(expected.partitionId, actual.partitionId);
+   assertEquals(expected.receiverId, actual.receiverId);
+   }
+
+   @Test
+   public void testCancelPartitionRequest() {
+   NettyMessage.CancelPartitionRequest expected = new 
NettyMessage.CancelPartitionRequest(new InputChannelID());
 
 Review comment:
   ditto




[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-08 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r389357061
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/CreditBasedPartitionRequestClientHandlerTest.java
 ##
 @@ -443,10 +587,13 @@ private static BufferResponse createBufferResponse(
// Skip general header bytes
serialized.readBytes(NettyMessage.FRAME_HEADER_LENGTH);
 
-   // Deserialize the bytes again. We have to go this way, because 
we only partly deserialize
-   // the header of the response and wait for a buffer from the 
buffer pool to copy the payload
-   // data into.
-   BufferResponse deserialized = 
BufferResponse.readFrom(serialized);
+   // Deserialize the bytes again. We have to go this way to 
ensure the data buffer part
+   // is consistent with the input channel sent to.
 
 Review comment:
  I think the previous comment makes more sense: `because we only partly deserialize the header of the response and wait for a buffer from the allocator to copy the payload data into`.





[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-08 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r389356875
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/CreditBasedPartitionRequestClientHandlerTest.java
 ##
 @@ -430,23 +431,35 @@ private static void releaseResource(SingleInputGate 
inputGate, NetworkBufferPool
/**
 * Returns a deserialized buffer message as it would be received during 
runtime.
 */
-   private static BufferResponse createBufferResponse(
+   private BufferResponse createBufferResponse(
Buffer buffer,
int sequenceNumber,
-   InputChannelID receivingChannelId,
-   int backlog) throws IOException {
+   RemoteInputChannel receivingChannel,
+   int backlog,
+   CreditBasedPartitionRequestClientHandler clientHandler) 
throws IOException {
+
// Mock buffer to serialize
-   BufferResponse resp = new BufferResponse(buffer, 
sequenceNumber, receivingChannelId, backlog);
+   BufferResponse resp = new BufferResponse(
+   buffer,
+   sequenceNumber,
+   receivingChannel.getInputChannelId(),
+   backlog);
 
ByteBuf serialized = 
resp.write(UnpooledByteBufAllocator.DEFAULT);
 
// Skip general header bytes
serialized.readBytes(NettyMessage.FRAME_HEADER_LENGTH);
 
+
// Deserialize the bytes again. We have to go this way, because 
we only partly deserialize
// the header of the response and wait for a buffer from the 
buffer pool to copy the payload
// data into.
-   BufferResponse deserialized = 
BufferResponse.readFrom(serialized);
+   NetworkBufferAllocator allocator = new 
NetworkBufferAllocator(clientHandler);
+   BufferResponse deserialized = 
BufferResponse.readFrom(serialized, allocator);
+
+   if (deserialized.getBuffer() != null) {
 
 Review comment:
  It might be the case as you said, but it is not related to this PR's motivation, so I suggest not making this unrelated change.




[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-08 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r389356547
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/CreditBasedPartitionRequestClientHandlerTest.java
 ##
 @@ -419,6 +458,110 @@ public void testNotifyCreditAvailableAfterReleased() 
throws Exception {
}
}
 
+   @Test
+   public void testReadBufferResponseBeforeReleasingChannel() throws 
Exception {
+   testReadBufferResponseBeforeReleasingOrRemovingChannel(false);
+   }
+
+   @Test
+   public void testReadBufferResponseBeforeRemovingChannel() throws 
Exception {
+   testReadBufferResponseBeforeReleasingOrRemovingChannel(true);
+   }
+
+   @Test
+   public void testReadBufferResponseAfterReleasingChannel() throws 
Exception {
+   testReadBufferResponseAfterReleasingAndRemovingChannel(false);
+   }
+
+   @Test
+   public void testReadBufferResponseAfterRemovingChannel() throws 
Exception {
+   testReadBufferResponseAfterReleasingAndRemovingChannel(true);
+   }
+
+   private void 
testReadBufferResponseBeforeReleasingOrRemovingChannel(boolean isRemoved) 
throws Exception {
+   int bufferSize = 1024;
+
+   NetworkBufferPool networkBufferPool = new NetworkBufferPool(10, 
bufferSize, 2);
+   SingleInputGate inputGate = createSingleInputGate(1);
+   RemoteInputChannel inputChannel = new InputChannelBuilder()
+   .setMemorySegmentProvider(networkBufferPool)
+   .buildRemoteAndSetToGate(inputGate);
+   inputGate.assignExclusiveSegments();
+
+   CreditBasedPartitionRequestClientHandler handler = new 
CreditBasedPartitionRequestClientHandler();
+   EmbeddedChannel embeddedChannel = new EmbeddedChannel(handler);
+   handler.addInputChannel(inputChannel);
+
+   try {
+   BufferResponse bufferResponse = createBufferResponse(
+   TestBufferFactory.createBuffer(bufferSize),
+   0,
+   inputChannel.getInputChannelId(),
+   1,
+   new NetworkBufferAllocator(handler));
+
+   // Release the channel.
+   inputGate.close();
+   if (isRemoved) {
+   handler.removeInputChannel(inputChannel);
+   }
+
+   handler.channelRead(null, bufferResponse);
+
+   assertEquals(0, 
inputChannel.getNumberOfQueuedBuffers());
+   assertNotNull(bufferResponse.getBuffer());
+   assertTrue(bufferResponse.getBuffer().isRecycled());
+
+   embeddedChannel.runScheduledPendingTasks();
+   NettyMessage.CancelPartitionRequest 
cancelPartitionRequest = embeddedChannel.readOutbound();
+   assertNotNull(cancelPartitionRequest);
+   assertEquals(inputChannel.getInputChannelId(), 
cancelPartitionRequest.receiverId);
+   } finally {
+   releaseResource(inputGate, networkBufferPool);
+   }
+   }
+
+   private void 
testReadBufferResponseAfterReleasingAndRemovingChannel(boolean isRemoved) 
throws Exception {
 
 Review comment:
  It is better to deduplicate this method with `testReadBufferResponseBeforeReleasingOrRemovingChannel` above, because the code is almost identical except for the sequence that triggers the release.
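The deduplication the review asks for — one private helper carrying the shared body, with only the varying step injected — can be sketched in plain Java. The event names and the `Consumer` hook here are illustrative, not Flink's code:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.Consumer;

public class DedupTestSketch {

    // Shared body: setup, the read under test, and teardown live in one
    // place; only the release/remove step differs between the tests, so it
    // is injected as a hook that runs before the read.
    static List<String> runScenario(Consumer<List<String>> beforeRead) {
        List<String> events = new ArrayList<>();
        events.add("setup");
        beforeRead.accept(events); // the only part that varies per test
        events.add("read");
        events.add("teardown");
        return events;
    }

    public static void main(String[] args) {
        // "before releasing" variant
        List<String> a = runScenario(e -> e.add("release"));
        // "before removing" variant adds one extra step to the same hook
        List<String> b = runScenario(e -> { e.add("release"); e.add("remove"); });

        System.out.println(a.equals(Arrays.asList("setup", "release", "read", "teardown"))); // true
        System.out.println(b.contains("remove")); // true
    }
}
```

Each `@Test` method then collapses to a one-line call with the appropriate hook, mirroring how the existing boolean-flag helpers already work.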




[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-08 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r389355703
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/CreditBasedPartitionRequestClientHandlerTest.java
 ##
 @@ -419,6 +458,110 @@ public void testNotifyCreditAvailableAfterReleased() 
throws Exception {
}
}
 
+   @Test
+   public void testReadBufferResponseBeforeReleasingChannel() throws 
Exception {
+   testReadBufferResponseBeforeReleasingOrRemovingChannel(false);
+   }
+
+   @Test
+   public void testReadBufferResponseBeforeRemovingChannel() throws 
Exception {
+   testReadBufferResponseBeforeReleasingOrRemovingChannel(true);
+   }
+
+   @Test
+   public void testReadBufferResponseAfterReleasingChannel() throws 
Exception {
+   testReadBufferResponseAfterReleasingAndRemovingChannel(false);
+   }
+
+   @Test
+   public void testReadBufferResponseAfterRemovingChannel() throws 
Exception {
+   testReadBufferResponseAfterReleasingAndRemovingChannel(true);
+   }
+
+   private void 
testReadBufferResponseBeforeReleasingOrRemovingChannel(boolean isRemoved) 
throws Exception {
+   int bufferSize = 1024;
+
+   NetworkBufferPool networkBufferPool = new NetworkBufferPool(10, 
bufferSize, 2);
+   SingleInputGate inputGate = createSingleInputGate(1);
+   RemoteInputChannel inputChannel = new InputChannelBuilder()
+   .setMemorySegmentProvider(networkBufferPool)
+   .buildRemoteAndSetToGate(inputGate);
+   inputGate.assignExclusiveSegments();
+
+   CreditBasedPartitionRequestClientHandler handler = new 
CreditBasedPartitionRequestClientHandler();
+   EmbeddedChannel embeddedChannel = new EmbeddedChannel(handler);
+   handler.addInputChannel(inputChannel);
+
+   try {
+   BufferResponse bufferResponse = createBufferResponse(
+   TestBufferFactory.createBuffer(bufferSize),
+   0,
+   inputChannel.getInputChannelId(),
+   1,
+   new NetworkBufferAllocator(handler));
+
+   // Release the channel.
+   inputGate.close();
+   if (isRemoved) {
+   handler.removeInputChannel(inputChannel);
+   }
+
+   handler.channelRead(null, bufferResponse);
+
+   assertEquals(0, 
inputChannel.getNumberOfQueuedBuffers());
+   assertNotNull(bufferResponse.getBuffer());
+   assertTrue(bufferResponse.getBuffer().isRecycled());
+
+   embeddedChannel.runScheduledPendingTasks();
+   NettyMessage.CancelPartitionRequest 
cancelPartitionRequest = embeddedChannel.readOutbound();
+   assertNotNull(cancelPartitionRequest);
+   assertEquals(inputChannel.getInputChannelId(), 
cancelPartitionRequest.receiverId);
+   } finally {
+   releaseResource(inputGate, networkBufferPool);
+   }
+   }
+
+   private void 
testReadBufferResponseAfterReleasingAndRemovingChannel(boolean isRemoved) 
throws Exception {
 
 Review comment:
   ReleasingAndRemoving -> ReleasingOrRemoving




[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-08 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r389353353
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/NettyMessageClientDecoderDelegate.java
 ##
 @@ -0,0 +1,153 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.runtime.io.network.NetworkClientHandler;
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import org.apache.flink.shaded.netty4.io.netty.channel.ChannelHandlerContext;
+import 
org.apache.flink.shaded.netty4.io.netty.channel.ChannelInboundHandlerAdapter;
+import org.apache.flink.util.IOUtils;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.FRAME_HEADER_LENGTH;
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.MAGIC_NUMBER;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+import static org.apache.flink.util.Preconditions.checkState;
+
+/**
+ * Decodes messages from the received netty buffers. This decoder assumes the
+ * messages have the following format:
+ * +--------------+----------------+------------------------+
+ * | FRAME_HEADER | MESSAGE_HEADER | DATA BUFFER (Optional) |
+ * +--------------+----------------+------------------------+
+ *
+ * This decoder decodes the frame header and delegates the remaining work to the
+ * corresponding message decoders according to the message type. During this process,
+ * the frame header and message header are only accumulated if they span multiple
+ * received netty buffers, and the data buffer is copied directly into the buffer of
+ * the corresponding input channel to avoid further copying.
+ *
+ * The format of the frame header is
+ * +------------------+------------------+--------+
+ * | FRAME LENGTH (4) | MAGIC NUMBER (4) | ID (1) |
+ * +------------------+------------------+--------+
+ */
+public class NettyMessageClientDecoderDelegate extends 
ChannelInboundHandlerAdapter {
+   private final Logger LOG = 
LoggerFactory.getLogger(NettyMessageClientDecoderDelegate.class);
+
+   /** The decoder for BufferResponse. */
+private final NettyMessageDecoder bufferResponseDecoder;
+
+/** The decoder for messages other than BufferResponse. */
+   private final NettyMessageDecoder nonBufferResponseDecoder;
+
+   /** The accumulation buffer for the frame header. */
+   private ByteBuf frameHeaderBuffer;
+
+   /** The decoder for the current message. It is null if we are decoding 
the frame header. */
+   private NettyMessageDecoder currentDecoder;
+
+NettyMessageClientDecoderDelegate(NetworkClientHandler 
networkClientHandler) {
+   this.bufferResponseDecoder = new BufferResponseDecoder(
+   new NetworkBufferAllocator(
+   checkNotNull(networkClientHandler)));
+this.nonBufferResponseDecoder = new NonBufferResponseDecoder();
+}
+
+@Override
+public void channelActive(ChannelHandlerContext ctx) throws Exception {
+bufferResponseDecoder.onChannelActive(ctx);
+nonBufferResponseDecoder.onChannelActive(ctx);
+
+   frameHeaderBuffer = 
ctx.alloc().directBuffer(FRAME_HEADER_LENGTH);
+
+   super.channelActive(ctx);
+}
+
+   /**
+* Releases resources when the channel is closed. When exceptions are 
thrown during
+* processing received netty buffers, {@link 
CreditBasedPartitionRequestClientHandler}
+* is expected to catch the exception and close the channel and trigger 
this notification.
+*
+* @param ctx The context of the channel close notification.
+*/
+   @Override
+public void channelInactive(ChannelHandlerContext ctx) throws Exception {
+   IOUtils.cleanup(LOG, bufferResponseDecoder, 
nonBufferResponseDecoder);
+   frameHeaderBuffer.release();
+
+   super.channelInactive(ctx);
+}
+
+@O
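The frame header layout documented in the `NettyMessageClientDecoderDelegate` Javadoc above (frame length, magic number, message id) can be encoded and decoded with fixed-width reads. A minimal sketch using `java.nio.ByteBuffer`; the constants are illustrative stand-ins for the real `NettyMessage.FRAME_HEADER_LENGTH` and `NettyMessage.MAGIC_NUMBER`:

```java
import java.nio.ByteBuffer;

public class FrameHeaderSketch {

    // Illustrative values; the real constants live in NettyMessage.
    static final int FRAME_HEADER_LENGTH = 4 + 4 + 1;
    static final int MAGIC_NUMBER = 0xBADC0FFE;

    static ByteBuffer encodeHeader(int frameLength, byte msgId) {
        ByteBuffer buf = ByteBuffer.allocate(FRAME_HEADER_LENGTH);
        buf.putInt(frameLength);  // FRAME LENGTH (4)
        buf.putInt(MAGIC_NUMBER); // MAGIC NUMBER (4)
        buf.put(msgId);           // ID (1)
        buf.flip();
        return buf;
    }

    static byte decodeHeader(ByteBuffer buf) {
        int frameLength = buf.getInt();
        if (buf.getInt() != MAGIC_NUMBER) {
            throw new IllegalStateException("network stream corrupted: wrong magic number");
        }
        if (frameLength < FRAME_HEADER_LENGTH) {
            throw new IllegalStateException("frame length smaller than header");
        }
        return buf.get(); // the message type id selects the per-message decoder
    }

    public static void main(String[] args) {
        ByteBuffer header = encodeHeader(128, (byte) 0);
        System.out.println(decodeHeader(header)); // prints 0
    }
}
```

The delegate then dispatches on the returned id: `BufferResponse` frames go to the buffer-response decoder, everything else to the non-buffer decoder.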

[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-08 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r389354887
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/BufferResponseDecoder.java
 ##
 @@ -0,0 +1,119 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import org.apache.flink.shaded.netty4.io.netty.channel.ChannelHandlerContext;
+
+import javax.annotation.Nullable;
+
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.BufferResponse;
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.BufferResponse.MESSAGE_HEADER_LENGTH;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * The decoder for {@link BufferResponse}.
+ */
+class BufferResponseDecoder extends NettyMessageDecoder {
+
+   /** The Buffer allocator. */
+   private final NetworkBufferAllocator allocator;
+
+   /** The accumulation buffer of message header. */
+   private ByteBuf messageHeaderBuffer;
+
+   /**
+* The BufferResponse message that has its message header decoded, but 
still
+* not received all the bytes of the buffer part.
+*/
+   @Nullable
+   private BufferResponse bufferResponse;
+
+   /** How many bytes have been received or discarded for the data buffer 
part. */
+   private int decodedDataBufferSize;
+
+   BufferResponseDecoder(NetworkBufferAllocator allocator) {
+   this.allocator = checkNotNull(allocator);
+   }
+
+   @Override
+   public void onChannelActive(ChannelHandlerContext ctx) {
+   messageHeaderBuffer = 
ctx.alloc().directBuffer(MESSAGE_HEADER_LENGTH);
+   }
+
+   @Override
+   public DecodingResult onChannelRead(ByteBuf data) throws Exception {
+   if (bufferResponse == null) {
+   extractMessageHeader(data);
+   }
+
+   if (bufferResponse != null) {
+   int remainingBufferSize = bufferResponse.bufferSize - 
decodedDataBufferSize;
+   int actualBytesToDecode = 
Math.min(data.readableBytes(), remainingBufferSize);
+
+   // Decode the data buffer part only if this BufferResponse actually carries one.
+   if (actualBytesToDecode > 0) {
+   // If the input channel has been released, the respective data buffer part is
+   // discarded from the received buffer.
+   if (bufferResponse.getBuffer() == null) {
+   data.readerIndex(data.readerIndex() + 
actualBytesToDecode);
+   } else {
+   
bufferResponse.getBuffer().asByteBuf().writeBytes(data, actualBytesToDecode);
+   }
+
+   decodedDataBufferSize += actualBytesToDecode;
+   }
+
+   if (decodedDataBufferSize == bufferResponse.bufferSize) 
{
+   BufferResponse result = bufferResponse;
+   clearState();
+   return DecodingResult.fullMessage(result);
+   }
+   }
+
+   return DecodingResult.NOT_FINISHED;
+   }
+
+   private void extractMessageHeader(ByteBuf data) {
 
 Review comment:
   extractMessageHeader -> decodeMessageHeader
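The decoding loop quoted above (copy up to `remainingBufferSize` bytes per read, then emit the full message once `decodedDataBufferSize` reaches `bufferSize`) can be modeled without Netty. A minimal sketch with plain byte arrays; all names here are illustrative, and the real decoder works on Netty `ByteBuf`s and additionally handles the discard path for released channels:

```java
import java.util.Arrays;

/** Simplified model of BufferResponseDecoder's partial data-buffer decoding. */
public class DataBufferSketch {
    private final byte[] target;       // stands in for bufferResponse.getBuffer()
    private int decodedDataBufferSize; // bytes received so far for the data buffer

    DataBufferSketch(int bufferSize) {
        this.target = new byte[bufferSize];
    }

    /**
     * Consumes up to the remaining data-buffer bytes from {@code data} and
     * returns the completed buffer, or {@code null} if more input is needed.
     * Any surplus bytes in {@code data} belong to the next frame and are
     * simply not copied here.
     */
    byte[] onRead(byte[] data) {
        int remaining = target.length - decodedDataBufferSize;
        int toDecode = Math.min(data.length, remaining);
        System.arraycopy(data, 0, target, decodedDataBufferSize, toDecode);
        decodedDataBufferSize += toDecode;
        return decodedDataBufferSize == target.length ? target : null;
    }

    public static void main(String[] args) {
        DataBufferSketch decoder = new DataBufferSketch(4);
        // First network buffer carries only half of the 4-byte data buffer.
        if (decoder.onRead(new byte[] {1, 2}) != null) throw new AssertionError();
        // Second buffer completes it; the trailing byte 5 belongs to the next frame.
        byte[] full = decoder.onRead(new byte[] {3, 4, 5});
        if (!Arrays.equals(full, new byte[] {1, 2, 3, 4})) throw new AssertionError();
        System.out.println("data buffer sketch ok");
    }
}
```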


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-08 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r389354841
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/BufferResponseDecoder.java
 ##
 @@ -0,0 +1,119 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import org.apache.flink.shaded.netty4.io.netty.channel.ChannelHandlerContext;
+
+import javax.annotation.Nullable;
+
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.BufferResponse;
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.BufferResponse.MESSAGE_HEADER_LENGTH;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * The decoder for {@link BufferResponse}.
+ */
+class BufferResponseDecoder extends NettyMessageDecoder {
+
+   /** The Buffer allocator. */
+   private final NetworkBufferAllocator allocator;
+
+   /** The accumulation buffer of message header. */
+   private ByteBuf messageHeaderBuffer;
+
+   /**
+* The BufferResponse message whose message header has been decoded but
+* whose data buffer part has not yet been fully received.
+*/
+   @Nullable
+   private BufferResponse bufferResponse;
+
+   /** How many bytes have been received or discarded for the data buffer 
part. */
+   private int decodedDataBufferSize;
+
+   BufferResponseDecoder(NetworkBufferAllocator allocator) {
+   this.allocator = checkNotNull(allocator);
+   }
+
+   @Override
+   public void onChannelActive(ChannelHandlerContext ctx) {
+   messageHeaderBuffer = 
ctx.alloc().directBuffer(MESSAGE_HEADER_LENGTH);
+   }
+
+   @Override
+   public DecodingResult onChannelRead(ByteBuf data) throws Exception {
+   if (bufferResponse == null) {
+   extractMessageHeader(data);
+   }
+
+   if (bufferResponse != null) {
+   int remainingBufferSize = bufferResponse.bufferSize - 
decodedDataBufferSize;
+   int actualBytesToDecode = 
Math.min(data.readableBytes(), remainingBufferSize);
+
+   // Decode the data buffer part only if this BufferResponse actually carries one.
+   if (actualBytesToDecode > 0) {
+   // If the input channel has been released, the respective data buffer part is
+   // discarded from the received buffer.
+   if (bufferResponse.getBuffer() == null) {
+   data.readerIndex(data.readerIndex() + 
actualBytesToDecode);
+   } else {
+   
bufferResponse.getBuffer().asByteBuf().writeBytes(data, actualBytesToDecode);
+   }
+
+   decodedDataBufferSize += actualBytesToDecode;
+   }
+
+   if (decodedDataBufferSize == bufferResponse.bufferSize) 
{
+   BufferResponse result = bufferResponse;
+   clearState();
+   return DecodingResult.fullMessage(result);
+   }
+   }
+
+   return DecodingResult.NOT_FINISHED;
+   }
+
+   private void extractMessageHeader(ByteBuf data) {
+   ByteBuf toDecode = ByteBufUtils.accumulate(
 
 Review comment:
   toDecode -> fullHeaderBuf
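The `ByteBufUtils.accumulate` call discussed above either yields a buffer holding the complete header or signals that more network buffers are needed. A minimal sketch of that contract using `java.nio.ByteBuffer` in place of Netty's `ByteBuf`; the method name and exact semantics are assumed from the quoted call sites, not copied from Flink:

```java
import java.nio.ByteBuffer;

/** Simplified model of the accumulate-until-complete pattern in the decoders. */
public class AccumulateSketch {

    /**
     * Copies bytes from {@code data} into {@code target} until {@code target}
     * holds {@code targetLength} bytes. Returns {@code target} flipped for
     * reading once complete, or {@code null} if more input is still needed.
     */
    static ByteBuffer accumulate(ByteBuffer target, ByteBuffer data, int targetLength) {
        int missing = targetLength - target.position();
        int toCopy = Math.min(missing, data.remaining());
        for (int i = 0; i < toCopy; i++) {
            target.put(data.get());
        }
        if (target.position() < targetLength) {
            return null; // header still spans further network buffers
        }
        target.flip();
        return target;
    }

    public static void main(String[] args) {
        ByteBuffer header = ByteBuffer.allocate(8);
        // First network buffer carries only half of the 8-byte header.
        ByteBuffer first = ByteBuffer.wrap(new byte[] {1, 2, 3, 4});
        if (accumulate(header, first, 8) != null) throw new AssertionError();
        // Second buffer completes the header and carries extra payload bytes.
        ByteBuffer second = ByteBuffer.wrap(new byte[] {5, 6, 7, 8, 9, 9});
        ByteBuffer full = accumulate(header, second, 8);
        if (full == null || full.remaining() != 8) throw new AssertionError();
        if (second.remaining() != 2) throw new AssertionError(); // payload left over
        System.out.println("accumulate sketch ok");
    }
}
```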




[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-08 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r389354419
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/NonBufferResponseDecoder.java
 ##
 @@ -0,0 +1,89 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import org.apache.flink.shaded.netty4.io.netty.channel.ChannelHandlerContext;
+
+import java.net.ProtocolException;
+
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.BufferResponse;
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.ErrorResponse;
+
+/**
+ * The decoder for messages other than {@link BufferResponse}.
+ */
+class NonBufferResponseDecoder extends NettyMessageDecoder {
+
+   /** The initial size of the message header accumulation buffer. */
+   private static final int INITIAL_MESSAGE_HEADER_BUFFER_LENGTH = 128;
+
+   /** The accumulation buffer of the message header. */
+   private ByteBuf messageBuffer;
+
+   @Override
+   public void onChannelActive(ChannelHandlerContext ctx) {
+   messageBuffer = 
ctx.alloc().directBuffer(INITIAL_MESSAGE_HEADER_BUFFER_LENGTH);
+   }
+
+   @Override
+   void onNewMessageReceived(int msgId, int messageLength) {
+   super.onNewMessageReceived(msgId, messageLength);
+   ensureBufferCapacity();
+   }
+
+   @Override
+   public DecodingResult onChannelRead(ByteBuf data) throws Exception {
+   ByteBuf toDecode = ByteBufUtils.accumulate(
+   messageBuffer,
+   data,
+   messageLength,
+   messageBuffer.readableBytes());
+   if (toDecode == null) {
+   return DecodingResult.NOT_FINISHED;
+   }
+
+   NettyMessage nettyMessage;
+   switch (msgId) {
+   case ErrorResponse.ID:
+   nettyMessage = ErrorResponse.readFrom(toDecode);
+   break;
+   default:
+   throw new ProtocolException("Received unknown 
message from producer: " + msgId);
+   }
+
+   messageBuffer.clear();
+   return DecodingResult.fullMessage(nettyMessage);
+   }
+
+   /**
+* Ensures the message header accumulation buffer has enough capacity 
for
+* the current message.
+*/
+   private void ensureBufferCapacity() {
+   if (messageBuffer.writerIndex() == 0 && 
messageBuffer.capacity() < messageLength) {
 
 Review comment:
   `messageBuffer.writerIndex() == 0` should not be part of the condition. If it 
does not hold, we should throw an exception here rather than silently skip the 
capacity adjustment. You could add `assert(messageBuffer.writerIndex() == 0)` 
instead if you want.
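The suggested shape can be sketched as follows; a plain growable array stands in for Netty's `ByteBuf`, since only the control flow matters here. This is the reviewer's suggestion modeled in isolation, not the committed code:

```java
/** Sketch of the suggested ensureBufferCapacity shape: fail loudly, then grow. */
public class CapacitySketch {
    byte[] messageBuffer = new byte[128]; // accumulation buffer
    int writerIndex;                      // bytes already written into messageBuffer
    int messageLength;                    // length of the message being decoded

    void onNewMessageReceived(int messageLength) {
        this.messageLength = messageLength;
        ensureBufferCapacity();
    }

    private void ensureBufferCapacity() {
        // A new message must always start with an empty accumulation buffer;
        // a non-zero writerIndex indicates a decoder bug, so throw instead of
        // silently skipping the capacity adjustment.
        if (writerIndex != 0) {
            throw new IllegalStateException("previous message not fully consumed");
        }
        if (messageBuffer.length < messageLength) {
            messageBuffer = new byte[messageLength];
        }
    }

    public static void main(String[] args) {
        CapacitySketch decoder = new CapacitySketch();
        decoder.onNewMessageReceived(256);
        if (decoder.messageBuffer.length < 256) throw new AssertionError();
        System.out.println("capacity sketch ok");
    }
}
```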




[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-08 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r389354238
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/NonBufferResponseDecoder.java
 ##
 @@ -0,0 +1,89 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import org.apache.flink.shaded.netty4.io.netty.channel.ChannelHandlerContext;
+
+import java.net.ProtocolException;
+
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.BufferResponse;
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.ErrorResponse;
+
+/**
+ * The decoder for messages other than {@link BufferResponse}.
+ */
+class NonBufferResponseDecoder extends NettyMessageDecoder {
+
+   /** The initial size of the message header accumulation buffer. */
+   private static final int INITIAL_MESSAGE_HEADER_BUFFER_LENGTH = 128;
+
+   /** The accumulation buffer of the message header. */
 
 Review comment:
   Please clarify whether this buffer holds only the message header or the whole 
message. If only the header, rename the messageBuffer field below to 
messageHeaderBuf; otherwise remove `header` from the comment.




[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-08 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r389353944
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/NonBufferResponseDecoder.java
 ##
 @@ -0,0 +1,89 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import org.apache.flink.shaded.netty4.io.netty.channel.ChannelHandlerContext;
+
+import java.net.ProtocolException;
+
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.BufferResponse;
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.ErrorResponse;
+
+/**
+ * The decoder for messages other than {@link BufferResponse}.
+ */
+class NonBufferResponseDecoder extends NettyMessageDecoder {
+
+   /** The initial size of the message header accumulation buffer. */
+   private static final int INITIAL_MESSAGE_HEADER_BUFFER_LENGTH = 128;
+
+   /** The accumulation buffer of the message header. */
+   private ByteBuf messageBuffer;
 
 Review comment:
   messageBuffer  -> messageHeaderBuf




[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-08 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r389353353
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/NettyMessageClientDecoderDelegate.java
 ##
 @@ -0,0 +1,153 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.runtime.io.network.NetworkClientHandler;
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import org.apache.flink.shaded.netty4.io.netty.channel.ChannelHandlerContext;
+import 
org.apache.flink.shaded.netty4.io.netty.channel.ChannelInboundHandlerAdapter;
+import org.apache.flink.util.IOUtils;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.FRAME_HEADER_LENGTH;
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.MAGIC_NUMBER;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+import static org.apache.flink.util.Preconditions.checkState;
+
+/**
+ * Decodes messages from the received netty buffers. This decoder assumes the
+ * messages have the following format:
+ * +--------------+----------------+------------------------+
+ * | FRAME_HEADER | MESSAGE_HEADER | DATA BUFFER (Optional) |
+ * +--------------+----------------+------------------------+
+ *
+ * This decoder decodes the frame header and delegates the remaining work to
+ * the corresponding message decoders according to the message type. During
+ * this process, the frame header and message header are accumulated only if
+ * they span multiple received netty buffers, while the data buffer is copied
+ * directly into the buffer of the corresponding input channel to avoid an
+ * extra copy.
+ *
+ * The format of the frame header is
+ * +------------------+------------------+--------+
+ * | FRAME LENGTH (4) | MAGIC NUMBER (4) | ID (1) |
+ * +------------------+------------------+--------+
+ */
+public class NettyMessageClientDecoderDelegate extends 
ChannelInboundHandlerAdapter {
+   private final Logger LOG = 
LoggerFactory.getLogger(NettyMessageClientDecoderDelegate.class);
+
+   /** The decoder for BufferResponse. */
+private final NettyMessageDecoder bufferResponseDecoder;
+
+/** The decoder for messages other than BufferResponse. */
+   private final NettyMessageDecoder nonBufferResponseDecoder;
+
+   /** The accumulation buffer for the frame header. */
+   private ByteBuf frameHeaderBuffer;
+
+   /** The decoder for the current message. It is null if we are decoding 
the frame header. */
+   private NettyMessageDecoder currentDecoder;
+
+NettyMessageClientDecoderDelegate(NetworkClientHandler 
networkClientHandler) {
+   this.bufferResponseDecoder = new BufferResponseDecoder(
+   new NetworkBufferAllocator(
+   checkNotNull(networkClientHandler)));
+this.nonBufferResponseDecoder = new NonBufferResponseDecoder();
+}
+
+@Override
+public void channelActive(ChannelHandlerContext ctx) throws Exception {
+bufferResponseDecoder.onChannelActive(ctx);
+nonBufferResponseDecoder.onChannelActive(ctx);
+
+   frameHeaderBuffer = 
ctx.alloc().directBuffer(FRAME_HEADER_LENGTH);
+
+   super.channelActive(ctx);
+}
+
+   /**
+* Releases resources when the channel is closed. When exceptions are 
thrown during
+* processing received netty buffers, {@link 
CreditBasedPartitionRequestClientHandler}
+* is expected to catch the exception and close the channel and trigger 
this notification.
+*
+* @param ctx The context of the channel close notification.
+*/
+   @Override
+public void channelInactive(ChannelHandlerContext ctx) throws Exception {
+   IOUtils.cleanup(LOG, bufferResponseDecoder, 
nonBufferResponseDecoder);
+   frameHeaderBuffer.release();
+
+   super.channelInactive(ctx);
+}
+
+@O
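The frame-header layout described in the quoted javadoc (FRAME LENGTH, MAGIC NUMBER, ID; FRAME_HEADER_LENGTH = 9 bytes) can be exercised with a minimal decoding sketch. The magic value below is a placeholder chosen for the sketch, not Flink's actual `MAGIC_NUMBER` constant:

```java
import java.nio.ByteBuffer;

/** Sketch: decoding the 9-byte frame header (length, magic, message id). */
public class FrameHeaderSketch {
    static final int MAGIC = 0xCAFEBABE;          // placeholder magic, assumed
    static final int FRAME_HEADER_LENGTH = 4 + 4 + 1;

    /** Returns {bodyLength, msgId} after validating the magic number. */
    static int[] decodeFrameHeader(ByteBuffer buf) {
        int frameLength = buf.getInt();
        int magic = buf.getInt();
        if (magic != MAGIC) {
            throw new IllegalStateException("network stream corrupted: wrong magic number");
        }
        byte msgId = buf.get();
        // The frame length includes the header itself; subtract it to get the body size.
        return new int[] {frameLength - FRAME_HEADER_LENGTH, msgId};
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(FRAME_HEADER_LENGTH);
        buf.putInt(29).putInt(MAGIC).put((byte) 0);
        buf.flip();
        int[] decoded = decodeFrameHeader(buf);
        if (decoded[0] != 20 || decoded[1] != 0) throw new AssertionError();
        System.out.println("frame header sketch ok");
    }
}
```

In the real pipeline this step runs first, after which the delegate hands the remaining bytes to the decoder selected by the message id.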

[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-08 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r389353672
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/NonBufferResponseDecoder.java
 ##
 @@ -0,0 +1,89 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import org.apache.flink.shaded.netty4.io.netty.channel.ChannelHandlerContext;
+
+import java.net.ProtocolException;
+
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.BufferResponse;
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.ErrorResponse;
+
+/**
+ * The decoder for messages other than {@link BufferResponse}.
+ */
+class NonBufferResponseDecoder extends NettyMessageDecoder {
+
+   /** The initial size of the message header accumulation buffer. */
+   private static final int INITIAL_MESSAGE_HEADER_BUFFER_LENGTH = 128;
+
+   /** The accumulation buffer of the message header. */
+   private ByteBuf messageBuffer;
+
+   @Override
+   public void onChannelActive(ChannelHandlerContext ctx) {
+   messageBuffer = 
ctx.alloc().directBuffer(INITIAL_MESSAGE_HEADER_BUFFER_LENGTH);
+   }
+
+   @Override
+   void onNewMessageReceived(int msgId, int messageLength) {
+   super.onNewMessageReceived(msgId, messageLength);
+   ensureBufferCapacity();
+   }
+
+   @Override
+   public DecodingResult onChannelRead(ByteBuf data) throws Exception {
+   ByteBuf toDecode = ByteBufUtils.accumulate(
 
 Review comment:
   toDecode -> fullMessageBuf




[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-08 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r389353672
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/NonBufferResponseDecoder.java
 ##
 @@ -0,0 +1,89 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import org.apache.flink.shaded.netty4.io.netty.channel.ChannelHandlerContext;
+
+import java.net.ProtocolException;
+
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.BufferResponse;
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.ErrorResponse;
+
+/**
+ * The decoder for messages other than {@link BufferResponse}.
+ */
+class NonBufferResponseDecoder extends NettyMessageDecoder {
+
+   /** The initial size of the message header accumulation buffer. */
+   private static final int INITIAL_MESSAGE_HEADER_BUFFER_LENGTH = 128;
+
+   /** The accumulation buffer of the message header. */
+   private ByteBuf messageBuffer;
+
+   @Override
+   public void onChannelActive(ChannelHandlerContext ctx) {
+   messageBuffer = 
ctx.alloc().directBuffer(INITIAL_MESSAGE_HEADER_BUFFER_LENGTH);
+   }
+
+   @Override
+   void onNewMessageReceived(int msgId, int messageLength) {
+   super.onNewMessageReceived(msgId, messageLength);
+   ensureBufferCapacity();
+   }
+
+   @Override
+   public DecodingResult onChannelRead(ByteBuf data) throws Exception {
+   ByteBuf toDecode = ByteBufUtils.accumulate(
 
 Review comment:
   toDecode -> fullMessage




[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-08 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r389353353
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/NettyMessageClientDecoderDelegate.java
 ##
 @@ -0,0 +1,153 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.runtime.io.network.NetworkClientHandler;
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import org.apache.flink.shaded.netty4.io.netty.channel.ChannelHandlerContext;
+import 
org.apache.flink.shaded.netty4.io.netty.channel.ChannelInboundHandlerAdapter;
+import org.apache.flink.util.IOUtils;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.FRAME_HEADER_LENGTH;
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.MAGIC_NUMBER;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+import static org.apache.flink.util.Preconditions.checkState;
+
+/**
+ * Decodes messages from the received netty buffers. This decoder assumes the
+ * messages have the following format:
+ * +---++
+ * | FRAME_HEADER ||  MESSAGE_HEADER   | DATA BUFFER (Optional) |
+ * +---++
+ *
+ * This decoder decodes the frame header and delegates the following work 
to the
+ * corresponding message decoders according to the message type. During this 
process
+ * The frame header and message header are only accumulated if they span  
received
+ * multiple netty buffers, and the data buffer is copied directly to the buffer
+ * of corresponding input channel to avoid more copying.
+ *
+ * The format of the frame header is
+ * +--+--++
+ * | FRAME LENGTH (4) | MAGIC NUMBER (4) | ID (1) |
+ * +--+--++
+ */
+public class NettyMessageClientDecoderDelegate extends 
ChannelInboundHandlerAdapter {
+   private final Logger LOG = 
LoggerFactory.getLogger(NettyMessageClientDecoderDelegate.class);
+
+	/** The decoder for BufferResponse. */
+	private final NettyMessageDecoder bufferResponseDecoder;
+
+	/** The decoder for messages other than BufferResponse. */
+	private final NettyMessageDecoder nonBufferResponseDecoder;
+
+	/** The accumulation buffer for the frame header. */
+	private ByteBuf frameHeaderBuffer;
+
+	/** The decoder for the current message. It is null if we are decoding the frame header. */
+	private NettyMessageDecoder currentDecoder;
+
+	NettyMessageClientDecoderDelegate(NetworkClientHandler networkClientHandler) {
+		this.bufferResponseDecoder = new BufferResponseDecoder(
+			new NetworkBufferAllocator(
+				checkNotNull(networkClientHandler)));
+		this.nonBufferResponseDecoder = new NonBufferResponseDecoder();
+	}
+
+	@Override
+	public void channelActive(ChannelHandlerContext ctx) throws Exception {
+		bufferResponseDecoder.onChannelActive(ctx);
+		nonBufferResponseDecoder.onChannelActive(ctx);
+
+		frameHeaderBuffer = ctx.alloc().directBuffer(FRAME_HEADER_LENGTH);
+
+		super.channelActive(ctx);
+	}
+
+	/**
+	 * Releases resources when the channel is closed. When exceptions are thrown during
+	 * processing received netty buffers, {@link CreditBasedPartitionRequestClientHandler}
+	 * is expected to catch the exception, close the channel, and trigger this notification.
+	 *
+	 * @param ctx The context of the channel close notification.
+	 */
+	@Override
+	public void channelInactive(ChannelHandlerContext ctx) throws Exception {
+		IOUtils.cleanup(LOG, bufferResponseDecoder, nonBufferResponseDecoder);
+		frameHeaderBuffer.release();
+
+		super.channelInactive(ctx);
+	}
+
+@O
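The frame header layout described in the javadoc above can be sketched in plain Java as follows. This is an illustrative reimplementation, not Flink's actual decoder: only the layout, FRAME_HEADER_LENGTH = 4 + 4 + 1 = 9 and MAGIC_NUMBER = 0xBADC0FFE come from NettyMessage; the class and method names below are made up, and java.nio.ByteBuffer stands in for netty's ByteBuf.

```java
import java.nio.ByteBuffer;

// Hypothetical sketch of decoding the 9-byte frame header:
// FRAME LENGTH (4) | MAGIC NUMBER (4) | ID (1).
// Bytes are accumulated only while the header is still incomplete,
// mirroring the "accumulate only if it spans multiple buffers" idea.
class FrameHeaderSketch {
    static final int FRAME_HEADER_LENGTH = 4 + 4 + 1;
    static final int MAGIC_NUMBER = 0xBADC0FFE;

    private final ByteBuffer headerBuffer = ByteBuffer.allocate(FRAME_HEADER_LENGTH);
    private int frameLength = -1;
    private byte msgId = -1;

    /** Feeds received bytes; returns true once a complete header has been decoded. */
    boolean feed(ByteBuffer received) {
        // Copy at most the bytes still missing from the header.
        while (headerBuffer.hasRemaining() && received.hasRemaining()) {
            headerBuffer.put(received.get());
        }
        if (headerBuffer.hasRemaining()) {
            return false; // header spans multiple buffers; keep accumulating
        }
        headerBuffer.flip();
        frameLength = headerBuffer.getInt();
        if (headerBuffer.getInt() != MAGIC_NUMBER) {
            throw new IllegalStateException("Network stream corrupted: wrong magic number");
        }
        msgId = headerBuffer.get();
        return true;
    }

    /** Bytes remaining in this frame after the header. */
    int payloadLength() {
        return frameLength - FRAME_HEADER_LENGTH;
    }

    byte messageId() {
        return msgId;
    }
}
```

Feeding the header in two pieces exercises the accumulation path: the first 4-byte chunk returns false, and decoding completes only when the remaining 5 bytes arrive.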

[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-05 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r388168150
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/CreditBasedPartitionRequestClientHandlerTest.java
 ##
 @@ -419,6 +459,99 @@ public void testNotifyCreditAvailableAfterReleased() throws Exception {
	}
	}
 
+	@Test
+	public void testChannelReleasedBeforeDecodingBufferResponse() throws Exception {
+		testChannelReleasedOrRemovedBeforeDecodingBufferResponse(false);
+	}
+
+	@Test
+	public void testChannelRemovedBeforeDecodingBufferResponse() throws Exception {
+		testChannelReleasedOrRemovedBeforeDecodingBufferResponse(true);
+	}
+
+	@Test
+	public void testChannelReleasedBeforeReceivingBufferResponse() throws Exception {
+		testChannelReleasedOrRemovedBeforeReceivingBufferResponse(false);
+	}
+
+	@Test
+	public void testChannelRemovedBeforeReceivingBufferResponse() throws Exception {
+		testChannelReleasedOrRemovedBeforeReceivingBufferResponse(true);
+	}
+
+	private void testChannelReleasedOrRemovedBeforeDecodingBufferResponse(boolean isRemoved) throws Exception {
+		int bufferSize = 1024;
+
+		NetworkBufferPool networkBufferPool = new NetworkBufferPool(10, bufferSize, 2);
+		SingleInputGate inputGate = createSingleInputGate(1);
+		RemoteInputChannel inputChannel = new InputChannelBuilder()
+			.setMemorySegmentProvider(networkBufferPool)
+			.buildRemoteAndSetToGate(inputGate);
+		inputGate.assignExclusiveSegments();
+
+		CreditBasedPartitionRequestClientHandler handler = new CreditBasedPartitionRequestClientHandler();
+		handler.addInputChannel(inputChannel);
+
+		try {
+			BufferResponse bufferResponse = createBufferResponse(
+				TestBufferFactory.createBuffer(bufferSize),
+				0,
+				inputChannel.getInputChannelId(),
+				1,
+				new NetworkBufferAllocator(handler));
+
+			// Release the channel.
+			inputGate.close();
+			if (isRemoved) {
+				handler.removeInputChannel(inputChannel);
+			}
+
+			handler.channelRead(null, bufferResponse);
+
+			assertEquals(0, inputChannel.getNumberOfQueuedBuffers());
+			assertNotNull(bufferResponse.getBuffer());
+			assertTrue(bufferResponse.getBuffer().isRecycled());
+		} finally {
+			releaseResource(inputGate, networkBufferPool);
+		}
+	}
+
+	private void testChannelReleasedOrRemovedBeforeReceivingBufferResponse(boolean isRemoved) throws Exception {
+		int bufferSize = 1024;
+
+		NetworkBufferPool networkBufferPool = new NetworkBufferPool(10, bufferSize, 2);
+		SingleInputGate inputGate = createSingleInputGate(1);
+		RemoteInputChannel inputChannel = new InputChannelBuilder()
+			.setMemorySegmentProvider(networkBufferPool)
+			.buildRemoteAndSetToGate(inputGate);
+		inputGate.assignExclusiveSegments();
+
+		CreditBasedPartitionRequestClientHandler handler = spy(new CreditBasedPartitionRequestClientHandler());
+		handler.addInputChannel(inputChannel);
+
+		try {
+			// Release the channel.
+			inputGate.close();
+			if (isRemoved) {
+				handler.removeInputChannel(inputChannel);
+			}
+
+			BufferResponse bufferResponse = createBufferResponse(
+				TestBufferFactory.createBuffer(bufferSize),
+				0,
+				inputChannel.getInputChannelId(),
+				1,
+				new NetworkBufferAllocator(handler));
+			handler.channelRead(null, bufferResponse);
+
+			assertEquals(0, inputChannel.getNumberOfQueuedBuffers());
+			assertNull(bufferResponse.getBuffer());
+			verify(handler, times(1)).cancelRequestFor(eq(inputChannel.getInputChannelId()));
 
 Review comment:
   The above `testChannelReleasedOrRemovedBeforeDecodingBufferResponse` should also cover this verification. Also, using `spy` for verification is not recommended. We can also verify either via
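The suggestion above is cut off, but one common way to avoid Mockito's `spy` for this kind of verification is a hand-rolled recording subclass. The classes below are hypothetical stand-ins for the handler in the discussion (they are not Flink's `CreditBasedPartitionRequestClientHandler`, and `UUID` stands in for `InputChannelID`):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

// Minimal stand-in for the handler under test.
class Handler {
    void cancelRequestFor(UUID inputChannelId) {
        // the real handler would send a CancelPartitionRequest here
    }
}

// Records the channel IDs cancelRequestFor() was called with, replacing
// Mockito's verify(handler, times(1)).cancelRequestFor(...).
class RecordingHandler extends Handler {
    final List<UUID> cancelledChannels = new ArrayList<>();

    @Override
    void cancelRequestFor(UUID inputChannelId) {
        cancelledChannels.add(inputChannelId);
        super.cancelRequestFor(inputChannelId);
    }
}
```

The test then asserts on `cancelledChannels` directly, which keeps the production class unmocked and the expectation explicit.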

[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-05 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r388164491
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/CreditBasedPartitionRequestClientHandlerTest.java
 ##
 @@ -419,6 +459,99 @@ public void testNotifyCreditAvailableAfterReleased() throws Exception {
	}
	}
 
+	@Test
+	public void testChannelReleasedBeforeDecodingBufferResponse() throws Exception {
+		testChannelReleasedOrRemovedBeforeDecodingBufferResponse(false);
+	}
+
+	@Test
+	public void testChannelRemovedBeforeDecodingBufferResponse() throws Exception {
+		testChannelReleasedOrRemovedBeforeDecodingBufferResponse(true);
+	}
+
+	@Test
+	public void testChannelReleasedBeforeReceivingBufferResponse() throws Exception {
+		testChannelReleasedOrRemovedBeforeReceivingBufferResponse(false);
+	}
+
+	@Test
+	public void testChannelRemovedBeforeReceivingBufferResponse() throws Exception {
+		testChannelReleasedOrRemovedBeforeReceivingBufferResponse(true);
+	}
+
+	private void testChannelReleasedOrRemovedBeforeDecodingBufferResponse(boolean isRemoved) throws Exception {
+		int bufferSize = 1024;
+
+		NetworkBufferPool networkBufferPool = new NetworkBufferPool(10, bufferSize, 2);
+		SingleInputGate inputGate = createSingleInputGate(1);
+		RemoteInputChannel inputChannel = new InputChannelBuilder()
+			.setMemorySegmentProvider(networkBufferPool)
+			.buildRemoteAndSetToGate(inputGate);
+		inputGate.assignExclusiveSegments();
+
+		CreditBasedPartitionRequestClientHandler handler = new CreditBasedPartitionRequestClientHandler();
+		handler.addInputChannel(inputChannel);
+
+		try {
+			BufferResponse bufferResponse = createBufferResponse(
+				TestBufferFactory.createBuffer(bufferSize),
+				0,
+				inputChannel.getInputChannelId(),
+				1,
+				new NetworkBufferAllocator(handler));
+
+			// Release the channel.
+			inputGate.close();
+			if (isRemoved) {
+				handler.removeInputChannel(inputChannel);
+			}
+
+			handler.channelRead(null, bufferResponse);
+
+			assertEquals(0, inputChannel.getNumberOfQueuedBuffers());
+			assertNotNull(bufferResponse.getBuffer());
+			assertTrue(bufferResponse.getBuffer().isRecycled());
+		} finally {
+			releaseResource(inputGate, networkBufferPool);
+		}
+	}
+
+	private void testChannelReleasedOrRemovedBeforeReceivingBufferResponse(boolean isRemoved) throws Exception {
 
 Review comment:
   testReadBufferResponseAfterReleasingChannel


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services






[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-05 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r388162275
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/CreditBasedPartitionRequestClientHandlerTest.java
 ##
 @@ -419,6 +459,99 @@ public void testNotifyCreditAvailableAfterReleased() throws Exception {
	}
	}
 
+	@Test
+	public void testChannelReleasedBeforeDecodingBufferResponse() throws Exception {
+		testChannelReleasedOrRemovedBeforeDecodingBufferResponse(false);
+	}
+
+	@Test
+	public void testChannelRemovedBeforeDecodingBufferResponse() throws Exception {
+		testChannelReleasedOrRemovedBeforeDecodingBufferResponse(true);
+	}
+
+	@Test
+	public void testChannelReleasedBeforeReceivingBufferResponse() throws Exception {
+		testChannelReleasedOrRemovedBeforeReceivingBufferResponse(false);
+	}
+
+	@Test
+	public void testChannelRemovedBeforeReceivingBufferResponse() throws Exception {
+		testChannelReleasedOrRemovedBeforeReceivingBufferResponse(true);
+	}
+
+	private void testChannelReleasedOrRemovedBeforeDecodingBufferResponse(boolean isRemoved) throws Exception {
 
 Review comment:
   `testChannelReleasedOrRemovedBeforeDecodingBufferResponse` seems not named properly. The tests below actually release the channel after decoding the buffer response, not before decoding.
   `testDecodingBufferResponseBeforeReleasingChannel`?




[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-04 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r388097453
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/CreditBasedPartitionRequestClientHandler.java
 ##
 @@ -249,7 +248,7 @@ private void decodeMsg(Object msg) throws Throwable {
			NettyMessage.BufferResponse bufferOrEvent = (NettyMessage.BufferResponse) msg;
 
			RemoteInputChannel inputChannel = inputChannels.get(bufferOrEvent.receiverId);
-			if (inputChannel == null) {
+			if (inputChannel == null || inputChannel.isReleased()) {
 
 Review comment:
   We should also cancel the request in the case of a null data buffer.

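The guard being discussed can be sketched with stand-in types as follows. This is a hypothetical model, not Flink's actual `decodeMsg()`: `dropIfStale` and all types below are made up to illustrate the reviewer's point that the stale-channel branch must cancel the request even when the decoded data buffer is null.

```java
import java.util.List;
import java.util.Map;

// Hedged sketch: when the target channel is missing or already released,
// the already-decoded buffer (if any) must be recycled and the request
// cancelled, so neither memory nor credits leak.
class StaleResponseGuard {

    static final class TestBuffer {
        boolean recycled;
        void recycleBuffer() { recycled = true; }
    }

    static final class Channel {
        final boolean released;
        Channel(boolean released) { this.released = released; }
    }

    /** Returns true if the response was dropped (channel missing or released). */
    static boolean dropIfStale(
            Map<String, Channel> channels,
            String receiverId,
            TestBuffer dataBuffer,
            List<String> cancelledRequests) {
        Channel channel = channels.get(receiverId);
        if (channel == null || channel.released) {
            if (dataBuffer != null) {
                dataBuffer.recycleBuffer(); // free the already-allocated buffer
            }
            cancelledRequests.add(receiverId); // cancel even when dataBuffer == null
            return true;
        }
        return false;
    }
}
```

Note the cancellation happens outside the null check on the data buffer, which is exactly the case the comment asks to cover.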



[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-04 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r388094614
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyMessageClientDecoderDelegateTest.java
 ##
 @@ -0,0 +1,369 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.core.memory.MemorySegment;
+import org.apache.flink.core.memory.MemorySegmentFactory;
+import org.apache.flink.runtime.io.network.buffer.Buffer;
+import org.apache.flink.runtime.io.network.buffer.FreeingBufferRecycler;
+import org.apache.flink.runtime.io.network.buffer.NetworkBuffer;
+import org.apache.flink.runtime.io.network.partition.consumer.BufferProviderRemoteInputChannel;
+import org.apache.flink.runtime.io.network.partition.consumer.InputChannelID;
+import org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannel;
+import org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate;
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import org.apache.flink.shaded.netty4.io.netty.channel.embedded.EmbeddedChannel;
+import org.junit.Test;
+
+import java.util.ArrayList;
+import java.util.List;
+
+import static junit.framework.TestCase.assertEquals;
+import static junit.framework.TestCase.assertTrue;
+import static org.apache.flink.runtime.io.network.netty.NettyMessage.BufferResponse;
+import static org.apache.flink.runtime.io.network.netty.NettyMessage.ErrorResponse;
+import static org.apache.flink.runtime.io.network.netty.NettyTestUtil.verifyBufferResponseHeader;
+import static org.apache.flink.runtime.io.network.netty.NettyTestUtil.verifyErrorResponse;
+import static org.apache.flink.runtime.io.network.partition.InputChannelTestUtils.createSingleInputGate;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.fail;
+
+/**
+ * Tests the client side message decoder.
+ */
+public class NettyMessageClientDecoderDelegateTest {
+
+   private static final int BUFFER_SIZE = 1024;
+
+   private static final NettyBufferPool ALLOCATOR = new NettyBufferPool(1);
+
+	/**
+	 * Verifies that the client side decoder works well for unreleased input channels.
+	 */
+	@Test
+	public void testDownstreamMessageDecode() throws Exception {
+		int totalBufferRequired = 3;
+
+		SingleInputGate inputGate = createSingleInputGate(1);
+		RemoteInputChannel normalInputChannel = new BufferProviderRemoteInputChannel(inputGate, totalBufferRequired, BUFFER_SIZE);
+
+		CreditBasedPartitionRequestClientHandler handler = new CreditBasedPartitionRequestClientHandler();
+		handler.addInputChannel(normalInputChannel);
+
+		EmbeddedChannel channel = new EmbeddedChannel(new NettyMessageClientDecoderDelegate(handler));
+
+		testRepartitionMessagesAndDecode(
+			channel,
+			false,
+			false,
+			false,
+			normalInputChannel.getInputChannelId(),
+			null);
+	}
+
+	/**
+	 * Verifies that the client side decoder works well for empty buffers. Empty buffers should not
+	 * consume data buffers of the input channels.
+	 */
+	@Test
+	public void testDownstreamMessageDecodeWithEmptyBuffers() throws Exception {
 
 Review comment:
   Except for the test `testDownstreamMessageDecodeWithReleasedInputChannel`, all the other three tests have the same code path, so we can deduplicate them.
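The deduplication suggested above could look like the sketch below, where the shared code path moves into one parameterized helper. The names are illustrative and the real setup/verification harness is omitted; only the structure (three tests delegating to one helper) reflects the suggestion.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of folding same-code-path tests into a single parameterized helper,
// instead of repeating the channel/handler/decoder setup in each test.
class DecoderDecodeTests {
    final List<String> executed = new ArrayList<>();

    void testDownstreamMessageDecode() {
        runDecodeTest(/* emptyBuffer */ false, /* releasedChannel */ false);
    }

    void testDownstreamMessageDecodeWithEmptyBuffers() {
        runDecodeTest(true, false);
    }

    // The released-input-channel variant would keep its own test, since its
    // expectations differ (buffers must be dropped rather than queued).

    private void runDecodeTest(boolean emptyBuffer, boolean releasedChannel) {
        // common setup + decode + verification would live here exactly once;
        // this sketch only records which variant ran
        executed.add("decode(empty=" + emptyBuffer + ", released=" + releasedChannel + ")");
    }
}
```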




[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-04 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r388092398
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyMessageClientDecoderDelegateTest.java
 ##
 @@ -0,0 +1,369 @@

[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-04 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r388092125
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyMessageClientDecoderDelegateTest.java
 ##
 @@ -0,0 +1,369 @@

[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-04 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r388089494
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyMessageClientDecoderDelegateTest.java
 ##
 @@ -0,0 +1,369 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.core.memory.MemorySegment;
+import org.apache.flink.core.memory.MemorySegmentFactory;
+import org.apache.flink.runtime.io.network.buffer.Buffer;
+import org.apache.flink.runtime.io.network.buffer.FreeingBufferRecycler;
+import org.apache.flink.runtime.io.network.buffer.NetworkBuffer;
+import 
org.apache.flink.runtime.io.network.partition.consumer.BufferProviderRemoteInputChannel;
+import org.apache.flink.runtime.io.network.partition.consumer.InputChannelID;
+import 
org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannel;
+import org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate;
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import 
org.apache.flink.shaded.netty4.io.netty.channel.embedded.EmbeddedChannel;
+import org.junit.Test;
+
+import java.util.ArrayList;
+import java.util.List;
+
+import static junit.framework.TestCase.assertEquals;
+import static junit.framework.TestCase.assertTrue;
+import static org.apache.flink.runtime.io.network.netty.NettyMessage.BufferResponse;
+import static org.apache.flink.runtime.io.network.netty.NettyMessage.ErrorResponse;
+import static org.apache.flink.runtime.io.network.netty.NettyTestUtil.verifyBufferResponseHeader;
+import static org.apache.flink.runtime.io.network.netty.NettyTestUtil.verifyErrorResponse;
+import static org.apache.flink.runtime.io.network.partition.InputChannelTestUtils.createSingleInputGate;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.fail;
+
+/**
+ * Tests the client side message decoder.
+ */
+public class NettyMessageClientDecoderDelegateTest {
+
+   private static final int BUFFER_SIZE = 1024;
+
+   private static final NettyBufferPool ALLOCATOR = new NettyBufferPool(1);
+
+   /**
+* Verifies that the client side decoder works well for unreleased input channels.
+*/
+   @Test
+   public void testDownstreamMessageDecode() throws Exception {
+   int totalBufferRequired = 3;
+
+   SingleInputGate inputGate = createSingleInputGate(1);
+   RemoteInputChannel normalInputChannel = new BufferProviderRemoteInputChannel(inputGate, totalBufferRequired, BUFFER_SIZE);
+
+   CreditBasedPartitionRequestClientHandler handler = new CreditBasedPartitionRequestClientHandler();
+   handler.addInputChannel(normalInputChannel);
+
+   EmbeddedChannel channel = new EmbeddedChannel(new NettyMessageClientDecoderDelegate(handler));
+
+   testRepartitionMessagesAndDecode(
+   channel,
+   false,
+   false,
+   false,
+   normalInputChannel.getInputChannelId(),
+   null);
+   }
+
+   /**
+* Verifies that the client side decoder works well for empty buffers. Empty buffers should not
+* consume data buffers of the input channels.
+*/
+   @Test
+   public void testDownstreamMessageDecodeWithEmptyBuffers() throws Exception {
+   int totalBufferRequired = 3;
+
+   SingleInputGate inputGate = createSingleInputGate(1);
+   RemoteInputChannel normalInputChannel = new BufferProviderRemoteInputChannel(inputGate, totalBufferRequired, BUFFER_SIZE);
+
+   CreditBasedPartitionRequestClientHandler handler = new CreditBasedPartitionRequestClientHandler();
+   handler.addInputChannel(normalInputChannel);
+
+   EmbeddedChannel channel = new EmbeddedChannel(new NettyMessageClientDecoderDelegate(handler));
+
+   testRepartitionMessagesAndDecode(
+   chann
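The tests above drive the decoder through an `EmbeddedChannel`, feeding it repartitioned byte chunks and checking that complete messages come out, including empty buffers that must not consume the input channel's data buffers. As a rough, framework-free sketch of that accumulate-and-decode pattern (all names below are hypothetical illustrations, not Flink's or Netty's actual API): a length-prefixed decoder buffers partial input until a whole frame is available, and a zero-length frame is emitted without claiming any payload bytes.

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

/** Hypothetical sketch of length-prefixed frame decoding with partial input. */
public class FrameDecoderSketch {

    /** Accumulates incoming bytes and emits only fully received frames. */
    static class FrameDecoder {
        private final ByteBuffer pending = ByteBuffer.allocate(4096);

        /** Feed one chunk of network bytes; returns the frames completed by it. */
        List<byte[]> decode(byte[] chunk) {
            pending.put(chunk);
            List<byte[]> frames = new ArrayList<>();
            pending.flip();
            while (pending.remaining() >= 4) {
                pending.mark();
                int length = pending.getInt();      // 4-byte length prefix
                if (pending.remaining() < length) {
                    pending.reset();                // incomplete frame: wait for more bytes
                    break;
                }
                byte[] frame = new byte[length];    // length 0 => empty frame,
                pending.get(frame);                 // no payload bytes are consumed
                frames.add(frame);
            }
            pending.compact();                      // keep leftover bytes for the next chunk
            return frames;
        }
    }

    public static void main(String[] args) {
        FrameDecoder decoder = new FrameDecoder();
        // One 3-byte frame split across two chunks, followed by an empty frame.
        byte[] part1 = {0, 0, 0, 3, 'a'};
        byte[] part2 = {'b', 'c', 0, 0, 0, 0};
        List<byte[]> first = decoder.decode(part1);
        List<byte[]> second = decoder.decode(part2);
        System.out.println(first.size());               // 0 (frame not complete yet)
        System.out.println(second.size());              // 2 (data frame + empty frame)
        System.out.println(new String(second.get(0)));  // abc
        System.out.println(second.get(1).length);       // 0
    }
}
```

The split-chunk case in `main` mirrors what `testRepartitionMessagesAndDecode` appears to exercise: the same message bytes must decode to the same result regardless of how the network fragments them.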

[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-04 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r388090118
 
 


[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-04 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r388089494
 
 


[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-04 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r388083659
 
 


[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-04 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r388083507
 
 

 Review comment:
   we migh

[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-04 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r388083357
 
 


[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-04 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r388082602
 
 


[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-04 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r388076838
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyMessageClientDecoderDelegateTest.java
 ##
 @@ -0,0 +1,369 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.core.memory.MemorySegment;
+import org.apache.flink.core.memory.MemorySegmentFactory;
+import org.apache.flink.runtime.io.network.buffer.Buffer;
+import org.apache.flink.runtime.io.network.buffer.FreeingBufferRecycler;
+import org.apache.flink.runtime.io.network.buffer.NetworkBuffer;
+import org.apache.flink.runtime.io.network.partition.consumer.BufferProviderRemoteInputChannel;
+import org.apache.flink.runtime.io.network.partition.consumer.InputChannelID;
+import org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannel;
+import org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate;
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import org.apache.flink.shaded.netty4.io.netty.channel.embedded.EmbeddedChannel;
+import org.junit.Test;
+
+import java.util.ArrayList;
+import java.util.List;
+
+import static junit.framework.TestCase.assertEquals;
+import static junit.framework.TestCase.assertTrue;
+import static org.apache.flink.runtime.io.network.netty.NettyMessage.BufferResponse;
+import static org.apache.flink.runtime.io.network.netty.NettyMessage.ErrorResponse;
+import static org.apache.flink.runtime.io.network.netty.NettyTestUtil.verifyBufferResponseHeader;
+import static org.apache.flink.runtime.io.network.netty.NettyTestUtil.verifyErrorResponse;
+import static org.apache.flink.runtime.io.network.partition.InputChannelTestUtils.createSingleInputGate;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.fail;
+
+/**
+ * Tests the client side message decoder.
+ */
+public class NettyMessageClientDecoderDelegateTest {
+
+   private static final int BUFFER_SIZE = 1024;
+
+   private static final NettyBufferPool ALLOCATOR = new NettyBufferPool(1);
+
+   /**
+* Verifies that the client side decoder works well for unreleased input channels.
+*/
+   @Test
+   public void testDownstreamMessageDecode() throws Exception {
+   int totalBufferRequired = 3;
+
+   SingleInputGate inputGate = createSingleInputGate(1);
+   RemoteInputChannel normalInputChannel = new BufferProviderRemoteInputChannel(inputGate, totalBufferRequired, BUFFER_SIZE);
 
 Review comment:
  normalInputChannel -> inputChannel, no need to emphasize `normal` in this test.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-04 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r388076411
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/partition/consumer/BufferProviderRemoteInputChannel.java
 ##
 @@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.partition.consumer;
+
+import org.apache.flink.core.memory.MemorySegment;
+import org.apache.flink.core.memory.MemorySegmentFactory;
+import org.apache.flink.runtime.io.network.LocalConnectionManager;
+import org.apache.flink.runtime.io.network.buffer.Buffer;
+import org.apache.flink.runtime.io.network.buffer.FreeingBufferRecycler;
+import org.apache.flink.runtime.io.network.buffer.NetworkBuffer;
+import org.apache.flink.runtime.io.network.partition.InputChannelTestUtils;
+import org.apache.flink.runtime.io.network.partition.ResultPartitionID;
+import org.apache.flink.runtime.jobgraph.IntermediateResultPartitionID;
+
+import javax.annotation.Nullable;
+
+import static org.apache.flink.util.Preconditions.checkState;
+
+/**
+ * Special {@link RemoteInputChannel} implementation that correspond to buffer request.
 
 Review comment:
   correspond -> corresponds
   A special




[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-04 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r388076057
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/partition/consumer/BufferProviderRemoteInputChannel.java
 ##
 @@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.partition.consumer;
+
+import org.apache.flink.core.memory.MemorySegment;
+import org.apache.flink.core.memory.MemorySegmentFactory;
+import org.apache.flink.runtime.io.network.LocalConnectionManager;
+import org.apache.flink.runtime.io.network.buffer.Buffer;
+import org.apache.flink.runtime.io.network.buffer.FreeingBufferRecycler;
+import org.apache.flink.runtime.io.network.buffer.NetworkBuffer;
+import org.apache.flink.runtime.io.network.partition.InputChannelTestUtils;
+import org.apache.flink.runtime.io.network.partition.ResultPartitionID;
+import org.apache.flink.runtime.jobgraph.IntermediateResultPartitionID;
+
+import javax.annotation.Nullable;
+
+import static org.apache.flink.util.Preconditions.checkState;
+
+/**
+ * Special {@link RemoteInputChannel} implementation that correspond to buffer request.
+ */
+public class BufferProviderRemoteInputChannel extends RemoteInputChannel {
+   private final int maxNumberOfBuffers;
+   private final int bufferSize;
+
+   private int allocatedBuffers;
+
+   public BufferProviderRemoteInputChannel(
+   SingleInputGate inputGate,
+   int maxNumberOfBuffers,
+   int bufferSize) {
+
+   super(
+   inputGate,
+   0,
+   new ResultPartitionID(),
+   InputChannelBuilder.STUB_CONNECTION_ID,
+   new LocalConnectionManager(),
+   0,
+   0,
+   InputChannelTestUtils.newUnregisteredInputChannelMetrics(),
+   InputChannelTestUtils.StubMemorySegmentProvider.getInstance());
+
+   inputGate.setInputChannel(new IntermediateResultPartitionID(), this);
+
+   this.maxNumberOfBuffers = maxNumberOfBuffers;
+   this.bufferSize = bufferSize;
+   }
+
+   @Nullable
+   @Override
+   public Buffer requestBuffer() {
+   if (isReleased()) {
+   return null;
+   }
+
+   checkState(allocatedBuffers < maxNumberOfBuffers,
+   String.format("The number of allocated buffers %d have reached the maximum allowed %d.", allocatedBuffers, maxNumberOfBuffers));
 
 Review comment:
   have -> has




[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-04 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r388072527
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/ByteBufUtilsTest.java
 ##
 @@ -0,0 +1,128 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import org.apache.flink.shaded.netty4.io.netty.buffer.Unpooled;
+import org.junit.Test;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertSame;
+
+/**
+ * Tests the methods in {@link ByteBufUtils}.
+ */
+public class ByteBufUtilsTest {
+   private static final byte ACCUMULATION_BYTE = 0x7d;
+   private static final byte NON_ACCUMULATION_BYTE = 0x23;
+
+   @Test
+   public void testAccumulateWithoutCopy() {
+   int sourceLength = 128;
+   int sourceReaderIndex = 32;
+   int expectedAccumulationSize = 16;
+
+   ByteBuf src = createSourceBuffer(sourceLength, sourceReaderIndex, expectedAccumulationSize);
+   ByteBuf target = Unpooled.buffer(expectedAccumulationSize);
+
+   // If src has enough data and no data has been copied yet, src will be returned without modification.
+   ByteBuf accumulated = ByteBufUtils.accumulate(target, src, expectedAccumulationSize, target.readableBytes());
+
+   assertSame(src, accumulated);
+   assertEquals(sourceReaderIndex, src.readerIndex());
+   verifyBufferContent(src, sourceReaderIndex, expectedAccumulationSize);
+   }
+
+   @Test
+   public void testAccumulateWithCopy() {
+   int sourceLength = 128;
+   int firstSourceReaderIndex = 32;
+   int secondSourceReaderIndex = 0;
+   int expectedAccumulationSize = 128;
+
+   int firstCopyLength = sourceLength - firstSourceReaderIndex;
 
 Review comment:
   firstCopyLength -> firstAccumulationSize, also for secondCopyLength. 




[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-04 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r388067962
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/ByteBufUtilsTest.java
 ##
 @@ -0,0 +1,128 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import org.apache.flink.shaded.netty4.io.netty.buffer.Unpooled;
+import org.junit.Test;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertSame;
+
+/**
+ * Tests the methods in {@link ByteBufUtils}.
+ */
+public class ByteBufUtilsTest {
+   private static final byte ACCUMULATION_BYTE = 0x7d;
+   private static final byte NON_ACCUMULATION_BYTE = 0x23;
+
+   @Test
+   public void testAccumulateWithoutCopy() {
+   int sourceLength = 128;
+   int sourceReaderIndex = 32;
+   int expectedAccumulationSize = 16;
+
+   ByteBuf src = createSourceBuffer(sourceLength, sourceReaderIndex, expectedAccumulationSize);
+   ByteBuf target = Unpooled.buffer(expectedAccumulationSize);
+
+   // If src has enough data and no data has been copied yet, src will be returned without modification.
+   ByteBuf accumulated = ByteBufUtils.accumulate(target, src, expectedAccumulationSize, target.readableBytes());
+
+   assertSame(src, accumulated);
+   assertEquals(sourceReaderIndex, src.readerIndex());
+   verifyBufferContent(src, sourceReaderIndex, expectedAccumulationSize);
+   }
+
+   @Test
+   public void testAccumulateWithCopy() {
+   int sourceLength = 128;
+   int firstSourceReaderIndex = 32;
+   int secondSourceReaderIndex = 0;
+   int expectedAccumulationSize = 128;
+
+   int firstCopyLength = sourceLength - firstSourceReaderIndex;
+   int secondCopyLength = expectedAccumulationSize - firstCopyLength;
+
+   ByteBuf firstSource = createSourceBuffer(sourceLength, firstSourceReaderIndex, firstCopyLength);
+   ByteBuf secondSource = createSourceBuffer(sourceLength, secondSourceReaderIndex, secondCopyLength);
+
+   ByteBuf target = Unpooled.buffer(expectedAccumulationSize);
+
+   // If src does not have enough data, src will be copied into target and null will be returned.
+   ByteBuf accumulated = ByteBufUtils.accumulate(
+   target,
+   firstSource,
+   expectedAccumulationSize,
+   target.readableBytes());
+   assertNull(accumulated);
+   assertEquals(sourceLength, firstSource.readerIndex());
+   assertEquals(firstCopyLength, target.readableBytes());
+
+   // The remaining data will be copied from the second buffer, and the target buffer will be returned
+   // after all data is accumulated.
+   accumulated = ByteBufUtils.accumulate(
+   target,
+   secondSource,
+   expectedAccumulationSize,
+   target.readableBytes());
+   assertSame(target, accumulated);
+   assertEquals(secondSourceReaderIndex + secondCopyLength, secondSource.readerIndex());
+   assertEquals(expectedAccumulationSize, target.readableBytes());
+
+   verifyBufferContent(accumulated, 0, expectedAccumulationSize);
+   }
+
+   /**
+* Create a source buffer whose length is size. The content between readerIndex and
 
 Review comment:
  `size`, also for other arguments in javadoc.


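The tests quoted above pin down the contract of `ByteBufUtils.accumulate`: when the source already holds the required bytes and nothing has been accumulated yet, the source itself is returned untouched (zero-copy path); otherwise the available bytes are copied into the target and `null` is returned until the target is complete, at which point the target is returned. A rough stdlib-only sketch of that contract, using `java.nio.ByteBuffer` in place of Netty's `ByteBuf` (class and method names here are illustrative, not Flink's implementation):

```java
import java.nio.ByteBuffer;

public class AccumulateSketch {

    /**
     * Mirrors the contract under test: return src directly when it can satisfy
     * the whole request and nothing was copied yet; otherwise copy what is
     * available into target and return null until target holds totalSize bytes.
     */
    static ByteBuffer accumulate(ByteBuffer target, ByteBuffer src, int totalSize, int accumulatedSize) {
        if (accumulatedSize == 0 && src.remaining() >= totalSize) {
            return src; // zero-copy path: caller reads totalSize bytes from src itself
        }
        int toCopy = Math.min(src.remaining(), totalSize - accumulatedSize);
        for (int i = 0; i < toCopy; i++) {
            target.put(src.get()); // advance src's reader position while filling target
        }
        return target.position() == totalSize ? target : null;
    }

    public static void main(String[] args) {
        ByteBuffer target = ByteBuffer.allocate(8);

        // First chunk only has 5 of the 8 required bytes -> copied into target, null returned.
        ByteBuffer first = ByteBuffer.wrap(new byte[] {1, 2, 3, 4, 5});
        System.out.println(accumulate(target, first, 8, 0)); // null

        // Second chunk completes the accumulation -> the target buffer is returned.
        ByteBuffer second = ByteBuffer.wrap(new byte[] {6, 7, 8, 9});
        ByteBuffer done = accumulate(target, second, 8, target.position());
        System.out.println(done == target);     // true
        System.out.println(second.remaining()); // 1 (unconsumed byte stays for the next message)
    }
}
```

This mirrors why the tests assert both the returned reference (`assertSame(src, ...)` vs `assertSame(target, ...)`) and the reader index of each source buffer after the call.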

[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-02 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r386310570
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/partition/consumer/BufferProviderInputChannelBuilder.java
 ##
 @@ -0,0 +1,136 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.partition.consumer;
+
+import org.apache.flink.core.memory.MemorySegment;
+import org.apache.flink.core.memory.MemorySegmentFactory;
+import org.apache.flink.runtime.io.network.LocalConnectionManager;
+import org.apache.flink.runtime.io.network.buffer.Buffer;
+import org.apache.flink.runtime.io.network.buffer.FreeingBufferRecycler;
+import org.apache.flink.runtime.io.network.buffer.NetworkBuffer;
+import org.apache.flink.runtime.io.network.partition.InputChannelTestUtils;
+import org.apache.flink.runtime.io.network.partition.ResultPartitionID;
+
+import javax.annotation.Nullable;
+
+import static org.apache.flink.util.Preconditions.checkState;
+
+/**
+ * Builder for a special {@link RemoteInputChannel} that corresponds to buffer requests, allowing users to
+ * set the InputChannelId and release state.
+ */
+public class BufferProviderInputChannelBuilder {
+   private SingleInputGate inputGate = new SingleInputGateBuilder().build();
+   private InputChannelID id = new InputChannelID();
+   private int maxNumberOfBuffers = Integer.MAX_VALUE;
+   private int bufferSize = 32 * 1024;
+   private boolean isReleased = false;
+
+   public BufferProviderInputChannelBuilder setInputGate(SingleInputGate inputGate) {
+   this.inputGate = inputGate;
+   return this;
+   }
+
+   public BufferProviderInputChannelBuilder setId(InputChannelID id) {
+   this.id = id;
+   return this;
+   }
+
+   public BufferProviderInputChannelBuilder setMaxNumberOfBuffers(int maxNumberOfBuffers) {
+   this.maxNumberOfBuffers = maxNumberOfBuffers;
+   return this;
+   }
+
+   public BufferProviderInputChannelBuilder setBufferSize(int bufferSize) {
 
 Review comment:
   We can remove it if not used atm




[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-02 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r386310024
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyMessageClientDecoderDelegateTest.java
 ##
 @@ -0,0 +1,324 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.core.memory.MemorySegment;
+import org.apache.flink.core.memory.MemorySegmentFactory;
+import org.apache.flink.runtime.io.network.buffer.Buffer;
+import org.apache.flink.runtime.io.network.buffer.FreeingBufferRecycler;
+import org.apache.flink.runtime.io.network.buffer.NetworkBuffer;
+import org.apache.flink.runtime.io.network.partition.consumer.BufferProviderInputChannelBuilder;
+import org.apache.flink.runtime.io.network.partition.consumer.InputChannelID;
+import org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannel;
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import org.apache.flink.shaded.netty4.io.netty.channel.embedded.EmbeddedChannel;
+import org.junit.Test;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import static junit.framework.TestCase.assertEquals;
+import static junit.framework.TestCase.assertTrue;
+import static org.apache.flink.runtime.io.network.netty.NettyMessage.BufferResponse;
+import static org.apache.flink.runtime.io.network.netty.NettyMessage.ErrorResponse;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.fail;
+
+/**
+ * Tests the client side message decoder.
+ */
+public class NettyMessageClientDecoderDelegateTest {
+   private static final NettyBufferPool ALLOCATOR = new NettyBufferPool(1);
+
+   private static final InputChannelID NORMAL_CHANNEL_ID = new InputChannelID();
+
+   private static final InputChannelID RELEASED_CHANNEL_ID = new InputChannelID();
+
+   private static final InputChannelID REMOVED_CHANNEL_ID = new InputChannelID();
+
+   /**
+* Verifies that the client side decoder works well for unreleased input channels.
+*/
+   @Test
+   public void testDownstreamMessageDecode() throws Exception {
+   // 3 buffers required.
+   testRepartitionMessagesAndDecode(3, false, false, false);
+   }
+
+   /**
+* Verifies that the client side decoder works well for empty buffers. Empty buffers should not
+* consume data buffers of the input channels.
+*/
+   @Test
+   public void testDownstreamMessageDecodeWithEmptyBuffers() throws Exception {
+   // 4 buffers required.
+   testRepartitionMessagesAndDecode(4, true, false, false);
+   }
+
+   /**
+* Verifies that NettyMessageDecoder works well with buffers sent to released and removed input channels.
+* For such channels, no Buffer is available to receive the data buffer in the message, and the data buffer
+* part should be discarded before reading the next message.
+   @Test
+   public void testDownstreamMessageDecodeWithReleasedAndRemovedInputChannel() throws Exception {
+   // 3 buffers required.
+   testRepartitionMessagesAndDecode(3, false, true, true);
+   }
+
+   //--------------------------------------------------------------
+
+   private void testRepartitionMessagesAndDecode(
+   int numberOfBuffersInNormalChannel,
+   boolean hasEmptyBuffer,
+   boolean hasBufferForReleasedChannel,
+   boolean hasBufferForRemovedChannel) throws Exception {
+
+   EmbeddedChannel channel = createPartitionRequestClientHandler(numberOfBuffersInNormalChannel);
+
+   try {
+   List messages = createMessageList(hasEmptyBuffer, hasBufferForReleasedChannel, hasBufferForRemovedChannel);
+   repartitionMessagesAndVerifyDecoding(channel, messages);
+   } finally {
+   

[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-02 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r386309746
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyMessageClientDecoderDelegateTest.java
 ##
 @@ -0,0 +1,324 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.core.memory.MemorySegment;
+import org.apache.flink.core.memory.MemorySegmentFactory;
+import org.apache.flink.runtime.io.network.buffer.Buffer;
+import org.apache.flink.runtime.io.network.buffer.FreeingBufferRecycler;
+import org.apache.flink.runtime.io.network.buffer.NetworkBuffer;
+import org.apache.flink.runtime.io.network.partition.consumer.BufferProviderInputChannelBuilder;
+import org.apache.flink.runtime.io.network.partition.consumer.InputChannelID;
+import org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannel;
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import org.apache.flink.shaded.netty4.io.netty.channel.embedded.EmbeddedChannel;
+import org.junit.Test;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import static junit.framework.TestCase.assertEquals;
+import static junit.framework.TestCase.assertTrue;
+import static org.apache.flink.runtime.io.network.netty.NettyMessage.BufferResponse;
+import static org.apache.flink.runtime.io.network.netty.NettyMessage.ErrorResponse;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.fail;
+
+/**
+ * Tests the client side message decoder.
+ */
+public class NettyMessageClientDecoderDelegateTest {
+   private static final NettyBufferPool ALLOCATOR = new NettyBufferPool(1);
+
+   private static final InputChannelID NORMAL_CHANNEL_ID = new InputChannelID();
+
+   private static final InputChannelID RELEASED_CHANNEL_ID = new InputChannelID();
+
+   private static final InputChannelID REMOVED_CHANNEL_ID = new InputChannelID();
+
+   /**
+* Verifies that the client side decoder works well for unreleased input channels.
+*/
+   @Test
+   public void testDownstreamMessageDecode() throws Exception {
+   // 3 buffers required.
+   testRepartitionMessagesAndDecode(3, false, false, false);
+   }
+
+   /**
+* Verifies that the client side decoder works well for empty buffers. Empty buffers should not
+* consume data buffers of the input channels.
+*/
+   @Test
+   public void testDownstreamMessageDecodeWithEmptyBuffers() throws Exception {
+   // 4 buffers required.
+   testRepartitionMessagesAndDecode(4, true, false, false);
+   }
+
+   /**
+* Verifies that NettyMessageDecoder works well with buffers sent to released and removed input channels.
+* For such channels, no Buffer is available to receive the data buffer in the message, and the data buffer
+* part should be discarded before reading the next message.
+*/
+   @Test
+   public void testDownstreamMessageDecodeWithReleasedAndRemovedInputChannel() throws Exception {
+   // 3 buffers required.
+   testRepartitionMessagesAndDecode(3, false, true, true);
+   }
+
+   //--
+
+   private void testRepartitionMessagesAndDecode(
+   int numberOfBuffersInNormalChannel,
+   boolean hasEmptyBuffer,
+   boolean hasBufferForReleasedChannel,
+   boolean hasBufferForRemovedChannel) throws Exception {
+
+   EmbeddedChannel channel = createPartitionRequestClientHandler(numberOfBuffersInNormalChannel);
+
+   try {
+   List<NettyMessage> messages = createMessageList(hasEmptyBuffer, hasBufferForReleasedChannel, hasBufferForRemovedChannel);
+   repartitionMessagesAndVerifyDecoding(channel, messages);
+   } finally {
+   

[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-02 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r386306388
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyMessageClientDecoderDelegateTest.java
 ##

[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-02 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r386305106
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyMessageClientDecoderDelegateTest.java
 ##
+   /**
+* Verifies that the client side decoder works well for unreleased input channels.
+*/
+   @Test
+   public void testDownstreamMessageDecode() throws Exception {
+   // 3 buffers required.
 
 Review comment:
  If we want to explain the meaning of `3`, maybe we can define a local var for it.
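A minimal sketch of that suggestion, with hypothetical names standing in for the real test helper (nothing below is from the PR): naming the literal lets the call site document itself, making the `// 3 buffers required.` comment redundant.

```java
public class NamedBufferCount {

    // Hypothetical stand-in for testRepartitionMessagesAndDecode(...).
    static String decode(int numberOfBuffers) {
        return "decoded with " + numberOfBuffers + " buffers";
    }

    public static void main(String[] args) {
        // Naming the literal documents why 3 buffers are needed at the call site.
        final int numberOfRequiredBuffers = 3;
        System.out.println(decode(numberOfRequiredBuffers)); // prints: decoded with 3 buffers
    }
}
```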


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-02 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r386302580
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyMessageClientDecoderDelegateTest.java
 ##
+   /**
+* Verifies that NettyMessageDecoder works well with buffers sent to released and removed input channels.
+* For such channels, no Buffer is available to receive the data buffer in the message, and the data buffer
+* part should be discarded before reading the next message.
+*/
+   @Test
+   public void testDownstreamMessageDecodeWithReleasedAndRemovedInputChannel() throws Exception {
+   // 3 buffers required.
+   testRepartitionMessagesAndDecode(3, false, true, true);
 
 Review comment:
  Can we only verify one case for a test? I mean separating the release and empty buffer cases.
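A sketch of how the combined case could be split so each test exercises exactly one failure mode. The flags mirror the quoted `testRepartitionMessagesAndDecode(buffers, hasEmptyBuffer, released, removed)` signature, but the body is a hypothetical stand-in so the example runs standalone without JUnit:

```java
public class SplitDecodeTests {

    // Hypothetical stand-in for testRepartitionMessagesAndDecode(...);
    // it reports which special case was exercised.
    static String run(int buffers, boolean hasEmptyBuffer, boolean released, boolean removed) {
        if (released) {
            return "released channel: data discarded";
        }
        if (removed) {
            return "removed channel: data discarded";
        }
        return "normal decode of " + buffers + " buffers";
    }

    // Each "test" verifies exactly one case, as the review suggests.
    static void testReleasedInputChannelOnly() {
        if (!run(3, false, true, false).startsWith("released")) {
            throw new AssertionError();
        }
    }

    static void testRemovedInputChannelOnly() {
        if (!run(3, false, false, true).startsWith("removed")) {
            throw new AssertionError();
        }
    }

    public static void main(String[] args) {
        testReleasedInputChannelOnly();
        testRemovedInputChannelOnly();
        System.out.println("ok"); // prints: ok
    }
}
```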




[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-02 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r386301054
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyMessageClientDecoderDelegateTest.java
 ##

[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-02 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r386300489
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyMessageClientDecoderDelegateTest.java
 ##
+   private void testRepartitionMessagesAndDecode(
+   int numberOfBuffersInNormalChannel,
+   boolean hasEmptyBuffer,
+   boolean hasBufferForReleasedChannel,
+   boolean hasBufferForRemovedChannel) throws Exception {
+
+   EmbeddedChannel channel = createPartitionRequestClientHandler(numberOfBuffersInNormalChannel);
+
+   try {
 
 Review comment:
  `try (channel = createPartitionRequestClientHandler(numberOfBuffersInNormalChannel))`, then we can remove finally part.
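One caveat: Netty's `Channel` interface does not extend `AutoCloseable`, so `try (channel = ...)` would not compile directly on an `EmbeddedChannel`; a small wrapper gives the same effect. A runnable sketch of the pattern, where the wrapper is hypothetical and not from the PR:

```java
public class TryWithResourcesSketch {

    // Hypothetical wrapper: a real version would hold the EmbeddedChannel
    // and invoke channel.close() from close().
    static final class ChannelResource implements AutoCloseable {
        boolean closed;

        @Override
        public void close() {
            closed = true; // stands in for channel.close()
        }
    }

    public static void main(String[] args) {
        ChannelResource resource = new ChannelResource();
        try (ChannelResource r = resource) {
            // decode and verify messages here; close() runs even on exceptions,
            // which is exactly what the finally block achieves in the test.
        }
        System.out.println(resource.closed); // prints: true
    }
}
```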



[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-02 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r386295617
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyMessageClientDecoderDelegateTest.java
 ##

[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-02 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r386241044
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/CreditBasedPartitionRequestClientHandlerTest.java
 ##
 @@ -446,7 +546,11 @@ private static BufferResponse createBufferResponse(
// Deserialize the bytes again. We have to go this way, because we only partly deserialize
// the header of the response and wait for a buffer from the buffer pool to copy the payload
// data into.
 
 Review comment:
  `wait for a buffer from the buffer pool to copy the payload data into.` should be properly adjusted?




[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-01 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r386237138
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/CreditBasedPartitionRequestClientHandlerTest.java
 ##
 @@ -419,6 +456,68 @@ public void testNotifyCreditAvailableAfterReleased() throws Exception {
}
}
 
+   @Test
+   public void testReceivedBufferForRemovedChannel() throws Exception {
+   final int bufferSize = 1024;
+
+   NetworkBufferPool networkBufferPool = new NetworkBufferPool(10, bufferSize, 2);
+   SingleInputGate inputGate = createSingleInputGate(1);
+   RemoteInputChannel inputChannel = createRemoteInputChannel(inputGate, null, networkBufferPool);
+   inputGate.assignExclusiveSegments();
+
+   CreditBasedPartitionRequestClientHandler handler = new CreditBasedPartitionRequestClientHandler();
+   handler.addInputChannel(inputChannel);
+
+   try {
+   Buffer buffer = TestBufferFactory.createBuffer(bufferSize);
+   BufferResponse bufferResponse = createBufferResponse(
+   buffer,
+   0,
+   inputChannel.getInputChannelId(),
+   1,
+   new NetworkBufferAllocator(handler));
+
+   handler.removeInputChannel(inputChannel);
+   handler.channelRead(null, bufferResponse);
+
+   assertNotNull(bufferResponse.getBuffer());
+   assertTrue(bufferResponse.getBuffer().isRecycled());
+   } finally {
+   releaseResource(inputGate, networkBufferPool);
+   }
+   }
+
+   @Test
+   public void testReceivedBufferForReleasedChannel() throws Exception {
+   final int bufferSize = 1024;
+
+   NetworkBufferPool networkBufferPool = new NetworkBufferPool(10, bufferSize, 2);
+   SingleInputGate inputGate = createSingleInputGate(1);
+   RemoteInputChannel inputChannel = createRemoteInputChannel(inputGate, null, networkBufferPool);
+   inputGate.assignExclusiveSegments();
+
+   CreditBasedPartitionRequestClientHandler handler = new CreditBasedPartitionRequestClientHandler();
+   handler.addInputChannel(inputChannel);
+
+   try {
+   Buffer buffer = TestBufferFactory.createBuffer(bufferSize);
+   BufferResponse bufferResponse = createBufferResponse(
+   buffer,
+   0,
+   inputChannel.getInputChannelId(),
+   1,
+   new NetworkBufferAllocator(handler));
+
+   inputGate.close();
 
 Review comment:
   Can you check whether we already have a case that releases the channel before `createBufferResponse`? Then we can verify that the created `BufferResponse` has a `null` data buffer.
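For illustration, a rough sketch of what that variant might look like, reusing the helpers quoted above (the `createBufferResponse` and `NetworkBufferAllocator` signatures are assumed unchanged; this is not part of the PR):

```
// Hypothetical variant: release the channel *before* building the response.
inputChannel.releaseAllResources();

BufferResponse bufferResponse = createBufferResponse(
        TestBufferFactory.createBuffer(bufferSize),
        0,
        inputChannel.getInputChannelId(),
        1,
        new NetworkBufferAllocator(handler));

// With the channel already released, the allocator cannot hand out a buffer,
// so the created BufferResponse should carry a null data buffer.
assertNull(bufferResponse.getBuffer());
```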




[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-01 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r386236936
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/CreditBasedPartitionRequestClientHandlerTest.java
 ##
 @@ -419,6 +456,68 @@ public void testNotifyCreditAvailableAfterReleased() throws Exception {
}
}
 
+   @Test
+   public void testReceivedBufferForRemovedChannel() throws Exception {
+       final int bufferSize = 1024;
+
+       NetworkBufferPool networkBufferPool = new NetworkBufferPool(10, bufferSize, 2);
+       SingleInputGate inputGate = createSingleInputGate(1);
+       RemoteInputChannel inputChannel = createRemoteInputChannel(inputGate, null, networkBufferPool);
+       inputGate.assignExclusiveSegments();
+
+       CreditBasedPartitionRequestClientHandler handler = new CreditBasedPartitionRequestClientHandler();
+       handler.addInputChannel(inputChannel);
+
+       try {
+           Buffer buffer = TestBufferFactory.createBuffer(bufferSize);
+           BufferResponse bufferResponse = createBufferResponse(
+               buffer,
+               0,
+               inputChannel.getInputChannelId(),
+               1,
+               new NetworkBufferAllocator(handler));
+
+           handler.removeInputChannel(inputChannel);
+           handler.channelRead(null, bufferResponse);
+
+           assertNotNull(bufferResponse.getBuffer());
+           assertTrue(bufferResponse.getBuffer().isRecycled());
+       } finally {
+           releaseResource(inputGate, networkBufferPool);
+       }
+   }
+
+   @Test
+   public void testReceivedBufferForReleasedChannel() throws Exception {
+       final int bufferSize = 1024;
 
 Review comment:
   ditto: final




[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-01 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r386235708
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/CreditBasedPartitionRequestClientHandlerTest.java
 ##
 @@ -419,6 +456,68 @@ public void testNotifyCreditAvailableAfterReleased() throws Exception {
}
}
 
+   @Test
+   public void testReceivedBufferForRemovedChannel() throws Exception {
+   final int bufferSize = 1024;
 
 Review comment:
   nit: remove `final` to keep this test consistent.




[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-01 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r386235368
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/CreditBasedPartitionRequestClientHandlerTest.java
 ##
 @@ -419,6 +456,68 @@ public void testNotifyCreditAvailableAfterReleased() throws Exception {
}
}
 
+   @Test
+   public void testReceivedBufferForRemovedChannel() throws Exception {
+       final int bufferSize = 1024;
+
+       NetworkBufferPool networkBufferPool = new NetworkBufferPool(10, bufferSize, 2);
+       SingleInputGate inputGate = createSingleInputGate(1);
+       RemoteInputChannel inputChannel = createRemoteInputChannel(inputGate, null, networkBufferPool);
 
 Review comment:
   Use this approach instead? Then we do not rely on a `null` `PartitionRequestClient`, and I guess `createRemoteInputChannel` mainly exists to supply the required `PartitionRequestClient`.
   ```
   InputChannelBuilder.newBuilder()
   .setMemorySegmentProvider(networkBufferPool)
   .buildRemoteAndSetToGate(inputGate);
   ```
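Spelled out in the test, the suggestion would look roughly like this (a sketch, assuming `InputChannelBuilder` exposes these setters as in the current test utilities):

```
// Hypothetical replacement for createRemoteInputChannel(inputGate, null, networkBufferPool):
RemoteInputChannel inputChannel = InputChannelBuilder.newBuilder()
        .setMemorySegmentProvider(networkBufferPool)
        .buildRemoteAndSetToGate(inputGate);
```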




[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-01 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r386233930
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/CreditBasedPartitionRequestClientHandlerTest.java
 ##
 @@ -419,6 +456,68 @@ public void testNotifyCreditAvailableAfterReleased() throws Exception {
}
}
 
+   @Test
+   public void testReceivedBufferForRemovedChannel() throws Exception {
+       final int bufferSize = 1024;
+
+       NetworkBufferPool networkBufferPool = new NetworkBufferPool(10, bufferSize, 2);
+       SingleInputGate inputGate = createSingleInputGate(1);
+       RemoteInputChannel inputChannel = createRemoteInputChannel(inputGate, null, networkBufferPool);
+       inputGate.assignExclusiveSegments();
+
+       CreditBasedPartitionRequestClientHandler handler = new CreditBasedPartitionRequestClientHandler();
+       handler.addInputChannel(inputChannel);
+
+       try {
+           Buffer buffer = TestBufferFactory.createBuffer(bufferSize);
+           BufferResponse bufferResponse = createBufferResponse(
+               buffer,
+               0,
+               inputChannel.getInputChannelId(),
+               1,
+               new NetworkBufferAllocator(handler));
+
+           handler.removeInputChannel(inputChannel);
+           handler.channelRead(null, bufferResponse);
+
+           assertNotNull(bufferResponse.getBuffer());
+           assertTrue(bufferResponse.getBuffer().isRecycled());
 
 Review comment:
   Could you add this verification: `assertEquals(0, inputChannel.getNumberOfQueuedBuffers())`?
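In context, the suggested assertion would extend the existing checks roughly as follows (a sketch over the test quoted above; not part of the PR itself):

```
handler.removeInputChannel(inputChannel);
handler.channelRead(null, bufferResponse);

// The response buffer should be recycled rather than queued on the removed channel.
assertNotNull(bufferResponse.getBuffer());
assertTrue(bufferResponse.getBuffer().isRecycled());
assertEquals(0, inputChannel.getNumberOfQueuedBuffers());
```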




[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-01 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r386230247
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/CreditBasedPartitionRequestClientHandlerTest.java
 ##
 @@ -339,7 +368,11 @@ public void testNotifyCreditAvailable() throws Exception {
 
        // Trigger notify credits availability via buffer response on the condition of an un-writable channel
        final BufferResponse bufferResponse3 = createBufferResponse(
-           TestBufferFactory.createBuffer(32), 1, inputChannel1.getInputChannelId(), 1);
+           TestBufferFactory.createBuffer(32),
+           1,
+           inputChannel1.getInputChannelId(),
+           1,
+           new NetworkBufferAllocator(handler));
 
 Review comment:
   ditto: reuse the previous `NetworkBufferAllocator`




[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-01 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r386230094
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/CreditBasedPartitionRequestClientHandlerTest.java
 ##
 @@ -312,9 +333,17 @@ public void testNotifyCreditAvailable() throws Exception {
        // The buffer response will take one available buffer from input channel, and it will trigger
        // requesting (backlog + numExclusiveBuffers - numAvailableBuffers) floating buffers
        final BufferResponse bufferResponse1 = createBufferResponse(
-           TestBufferFactory.createBuffer(32), 0, inputChannel1.getInputChannelId(), 1);
+           TestBufferFactory.createBuffer(32),
+           0,
+           inputChannel1.getInputChannelId(),
+           1,
+           new NetworkBufferAllocator(handler));
        final BufferResponse bufferResponse2 = createBufferResponse(
-           TestBufferFactory.createBuffer(32), 0, inputChannel2.getInputChannelId(), 1);
+           TestBufferFactory.createBuffer(32),
+           0,
+           inputChannel2.getInputChannelId(),
+           1,
+           new NetworkBufferAllocator(handler));
 
 Review comment:
   nit: create the `NetworkBufferAllocator` only once, before the `try`; then all three `BufferResponse` instances can reuse it.
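A sketch of the suggested restructuring (assuming the handler is created before the `try`, as in the quoted test; illustrative only):

```
// Create the allocator once and share it across all three responses.
NetworkBufferAllocator allocator = new NetworkBufferAllocator(handler);

final BufferResponse bufferResponse1 = createBufferResponse(
        TestBufferFactory.createBuffer(32), 0, inputChannel1.getInputChannelId(), 1, allocator);
final BufferResponse bufferResponse2 = createBufferResponse(
        TestBufferFactory.createBuffer(32), 0, inputChannel2.getInputChannelId(), 1, allocator);
// ...later, bufferResponse3 reuses the same allocator as well.
```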




[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-02-28 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r385567830
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyMessageClientDecoderDelegateTest.java
 ##
 @@ -0,0 +1,411 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.core.memory.MemorySegment;
+import org.apache.flink.core.memory.MemorySegmentFactory;
+import org.apache.flink.runtime.io.network.PartitionRequestClient;
+import org.apache.flink.runtime.io.network.buffer.Buffer;
+import org.apache.flink.runtime.io.network.buffer.FreeingBufferRecycler;
+import org.apache.flink.runtime.io.network.buffer.NetworkBuffer;
+import org.apache.flink.runtime.io.network.buffer.NetworkBufferPool;
+import org.apache.flink.runtime.io.network.partition.consumer.InputChannelID;
+import 
org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannel;
+import org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate;
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import 
org.apache.flink.shaded.netty4.io.netty.channel.embedded.EmbeddedChannel;
+import org.junit.Test;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.function.Function;
+import java.util.function.Supplier;
+
+import static junit.framework.TestCase.assertEquals;
+import static junit.framework.TestCase.assertTrue;
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.BufferResponse;
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.ErrorResponse;
+import static 
org.apache.flink.runtime.io.network.partition.InputChannelTestUtils.createRemoteInputChannel;
+import static 
org.apache.flink.runtime.io.network.partition.InputChannelTestUtils.createSingleInputGate;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.fail;
+import static org.mockito.Mockito.mock;
+
+/**
+ * Tests the client side message decoder.
+ */
+public class NettyMessageClientDecoderDelegateTest {
+   private static final NettyBufferPool ALLOCATOR = new NettyBufferPool(1);
+
+   /**
+* Verifies that the client side decoder works well for unreleased 
input channels.
+*/
+   @Test
+   public void testDownstreamMessageDecode() throws Exception {
+   // 6 buffers required for running 2 rounds and 3 buffers each 
round.
+   NettyChannelAndInputChannelIds context = 
createPartitionRequestClientHandler(6);
+
+   Supplier<NettyMessage[]> messagesSupplier = () -> {
+   Buffer event = createDataBuffer(32);
+   event.tagAsEvent();
+
+   return new NettyMessage[]{
+   new 
NettyMessage.BufferResponse(createDataBuffer(128), 0, 
context.getNormalChannelId(), 4),
+   new 
NettyMessage.BufferResponse(createDataBuffer(256), 1, 
context.getNormalChannelId(), 3),
+   new NettyMessage.BufferResponse(event, 2, 
context.getNormalChannelId(), 4),
+   new NettyMessage.ErrorResponse(new 
RuntimeException("test"), context.getNormalChannelId()),
+   new 
NettyMessage.BufferResponse(createDataBuffer(56), 3, 
context.getNormalChannelId(), 4)
+   };
+   };
+
+   repartitionMessagesAndVerifyDecoding(
+   context,
+   messagesSupplier,
+   (int[] sizes) -> new int[]{
+   sizes[0] / 3,
+   sizes[0] + sizes[1] + sizes[2] / 3,
+   sizes[0] + sizes[1] + sizes[2] + sizes[3] / 3 * 
2,
+   sizes[0] + sizes[1] + sizes[2] + sizes[3] + 
sizes[4] / 3 * 2
+   });
+
+   repartitionMessagesAndVerifyDecoding(
+   context,
+   messagesSupplier,
+   

[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-02-28 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r385564299
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyMessageClientDecoderDelegateTest.java
 ##
 @@ -0,0 +1,411 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.core.memory.MemorySegment;
+import org.apache.flink.core.memory.MemorySegmentFactory;
+import org.apache.flink.runtime.io.network.PartitionRequestClient;
+import org.apache.flink.runtime.io.network.buffer.Buffer;
+import org.apache.flink.runtime.io.network.buffer.FreeingBufferRecycler;
+import org.apache.flink.runtime.io.network.buffer.NetworkBuffer;
+import org.apache.flink.runtime.io.network.buffer.NetworkBufferPool;
+import org.apache.flink.runtime.io.network.partition.consumer.InputChannelID;
+import 
org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannel;
+import org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate;
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import 
org.apache.flink.shaded.netty4.io.netty.channel.embedded.EmbeddedChannel;
+import org.junit.Test;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.function.Function;
+import java.util.function.Supplier;
+
+import static junit.framework.TestCase.assertEquals;
+import static junit.framework.TestCase.assertTrue;
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.BufferResponse;
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.ErrorResponse;
+import static 
org.apache.flink.runtime.io.network.partition.InputChannelTestUtils.createRemoteInputChannel;
+import static 
org.apache.flink.runtime.io.network.partition.InputChannelTestUtils.createSingleInputGate;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.fail;
+import static org.mockito.Mockito.mock;
+
+/**
+ * Tests the client side message decoder.
+ */
+public class NettyMessageClientDecoderDelegateTest {
+   private static final NettyBufferPool ALLOCATOR = new NettyBufferPool(1);
+
+   /**
+* Verifies that the client side decoder works well for unreleased 
input channels.
+*/
+   @Test
+   public void testDownstreamMessageDecode() throws Exception {
+   // 6 buffers required for running 2 rounds and 3 buffers each 
round.
+   NettyChannelAndInputChannelIds context = 
createPartitionRequestClientHandler(6);
+
+   Supplier<NettyMessage[]> messagesSupplier = () -> {
+   Buffer event = createDataBuffer(32);
+   event.tagAsEvent();
+
+   return new NettyMessage[]{
+   new 
NettyMessage.BufferResponse(createDataBuffer(128), 0, 
context.getNormalChannelId(), 4),
+   new 
NettyMessage.BufferResponse(createDataBuffer(256), 1, 
context.getNormalChannelId(), 3),
+   new NettyMessage.BufferResponse(event, 2, 
context.getNormalChannelId(), 4),
+   new NettyMessage.ErrorResponse(new 
RuntimeException("test"), context.getNormalChannelId()),
+   new 
NettyMessage.BufferResponse(createDataBuffer(56), 3, 
context.getNormalChannelId(), 4)
+   };
+   };
+
+   repartitionMessagesAndVerifyDecoding(
+   context,
+   messagesSupplier,
+   (int[] sizes) -> new int[]{
+   sizes[0] / 3,
+   sizes[0] + sizes[1] + sizes[2] / 3,
+   sizes[0] + sizes[1] + sizes[2] + sizes[3] / 3 * 
2,
+   sizes[0] + sizes[1] + sizes[2] + sizes[3] + 
sizes[4] / 3 * 2
+   });
+
+   repartitionMessagesAndVerifyDecoding(
+   context,
+   messagesSupplier,
+   

[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-02-28 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r385561318
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyMessageClientDecoderDelegateTest.java
 ##
 @@ -0,0 +1,411 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.core.memory.MemorySegment;
+import org.apache.flink.core.memory.MemorySegmentFactory;
+import org.apache.flink.runtime.io.network.PartitionRequestClient;
+import org.apache.flink.runtime.io.network.buffer.Buffer;
+import org.apache.flink.runtime.io.network.buffer.FreeingBufferRecycler;
+import org.apache.flink.runtime.io.network.buffer.NetworkBuffer;
+import org.apache.flink.runtime.io.network.buffer.NetworkBufferPool;
+import org.apache.flink.runtime.io.network.partition.consumer.InputChannelID;
+import 
org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannel;
+import org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate;
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import 
org.apache.flink.shaded.netty4.io.netty.channel.embedded.EmbeddedChannel;
+import org.junit.Test;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.function.Function;
+import java.util.function.Supplier;
+
+import static junit.framework.TestCase.assertEquals;
+import static junit.framework.TestCase.assertTrue;
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.BufferResponse;
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.ErrorResponse;
+import static 
org.apache.flink.runtime.io.network.partition.InputChannelTestUtils.createRemoteInputChannel;
+import static 
org.apache.flink.runtime.io.network.partition.InputChannelTestUtils.createSingleInputGate;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.fail;
+import static org.mockito.Mockito.mock;
+
+/**
+ * Tests the client side message decoder.
+ */
+public class NettyMessageClientDecoderDelegateTest {
+   private static final NettyBufferPool ALLOCATOR = new NettyBufferPool(1);
+
+   /**
+* Verifies that the client side decoder works well for unreleased 
input channels.
+*/
+   @Test
+   public void testDownstreamMessageDecode() throws Exception {
+   // 6 buffers required for running 2 rounds and 3 buffers each 
round.
+   NettyChannelAndInputChannelIds context = 
createPartitionRequestClientHandler(6);
+
+   Supplier<NettyMessage[]> messagesSupplier = () -> {
+   Buffer event = createDataBuffer(32);
+   event.tagAsEvent();
+
+   return new NettyMessage[]{
+   new 
NettyMessage.BufferResponse(createDataBuffer(128), 0, 
context.getNormalChannelId(), 4),
+   new 
NettyMessage.BufferResponse(createDataBuffer(256), 1, 
context.getNormalChannelId(), 3),
+   new NettyMessage.BufferResponse(event, 2, 
context.getNormalChannelId(), 4),
+   new NettyMessage.ErrorResponse(new 
RuntimeException("test"), context.getNormalChannelId()),
+   new 
NettyMessage.BufferResponse(createDataBuffer(56), 3, 
context.getNormalChannelId(), 4)
+   };
+   };
+
+   repartitionMessagesAndVerifyDecoding(
+   context,
+   messagesSupplier,
+   (int[] sizes) -> new int[]{
+   sizes[0] / 3,
+   sizes[0] + sizes[1] + sizes[2] / 3,
+   sizes[0] + sizes[1] + sizes[2] + sizes[3] / 3 * 
2,
+   sizes[0] + sizes[1] + sizes[2] + sizes[3] + 
sizes[4] / 3 * 2
+   });
+
+   repartitionMessagesAndVerifyDecoding(
+   context,
+   messagesSupplier,
+   

[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-02-28 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r385559423
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyMessageClientDecoderDelegateTest.java
 ##
 @@ -0,0 +1,411 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.core.memory.MemorySegment;
+import org.apache.flink.core.memory.MemorySegmentFactory;
+import org.apache.flink.runtime.io.network.PartitionRequestClient;
+import org.apache.flink.runtime.io.network.buffer.Buffer;
+import org.apache.flink.runtime.io.network.buffer.FreeingBufferRecycler;
+import org.apache.flink.runtime.io.network.buffer.NetworkBuffer;
+import org.apache.flink.runtime.io.network.buffer.NetworkBufferPool;
+import org.apache.flink.runtime.io.network.partition.consumer.InputChannelID;
+import org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannel;
+import org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate;
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import org.apache.flink.shaded.netty4.io.netty.channel.embedded.EmbeddedChannel;
+import org.junit.Test;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.function.Function;
+import java.util.function.Supplier;
+
+import static junit.framework.TestCase.assertEquals;
+import static junit.framework.TestCase.assertTrue;
+import static org.apache.flink.runtime.io.network.netty.NettyMessage.BufferResponse;
+import static org.apache.flink.runtime.io.network.netty.NettyMessage.ErrorResponse;
+import static org.apache.flink.runtime.io.network.partition.InputChannelTestUtils.createRemoteInputChannel;
+import static org.apache.flink.runtime.io.network.partition.InputChannelTestUtils.createSingleInputGate;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.fail;
+import static org.mockito.Mockito.mock;
+
+/**
+ * Tests the client side message decoder.
+ */
+public class NettyMessageClientDecoderDelegateTest {
+   private static final NettyBufferPool ALLOCATOR = new NettyBufferPool(1);
+
+   /**
+    * Verifies that the client side decoder works well for unreleased input channels.
+    */
+   @Test
+   public void testDownstreamMessageDecode() throws Exception {
+   // 6 buffers required for running 2 rounds and 3 buffers each round.
+   NettyChannelAndInputChannelIds context = createPartitionRequestClientHandler(6);
+
+   Supplier<NettyMessage[]> messagesSupplier = () -> {
+   Buffer event = createDataBuffer(32);
+   event.tagAsEvent();
+
+   return new NettyMessage[]{
+   new NettyMessage.BufferResponse(createDataBuffer(128), 0, context.getNormalChannelId(), 4),
+   new NettyMessage.BufferResponse(createDataBuffer(256), 1, context.getNormalChannelId(), 3),
+   new NettyMessage.BufferResponse(event, 2, context.getNormalChannelId(), 4),
+   new NettyMessage.ErrorResponse(new RuntimeException("test"), context.getNormalChannelId()),
+   new NettyMessage.BufferResponse(createDataBuffer(56), 3, context.getNormalChannelId(), 4)
+   };
+   };
+
+   repartitionMessagesAndVerifyDecoding(
+   context,
+   messagesSupplier,
+   (int[] sizes) -> new int[]{
 
 Review comment:
  It is hard to understand the rules for splitting the message, and it is done 
differently in different tests.
  I suggest simplifying this logic by splitting the message into fixed-length chunks. 
E.g. if the size of `BufferResponse` is 128, we can split it at a fixed length of 30. 
Then we do not need to pass this argument, which can be hidden inside the split 
logic.
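The suggested fixed-length split can be sketched with plain byte arrays (the class and method names here are illustrative, not part of the PR; the real test would slice a serialized `NettyMessage` `ByteBuf` the same way):

```java
import java.util.ArrayList;
import java.util.List;

public class FixedLengthSplitter {

    // Splits a serialized message into fixed-size chunks, mimicking how a
    // decoder test could feed partial network reads to the decoder. The last
    // chunk simply carries whatever bytes remain.
    static List<byte[]> split(byte[] serialized, int chunkSize) {
        List<byte[]> chunks = new ArrayList<>();
        for (int offset = 0; offset < serialized.length; offset += chunkSize) {
            int length = Math.min(chunkSize, serialized.length - offset);
            byte[] chunk = new byte[length];
            System.arraycopy(serialized, offset, chunk, 0, length);
            chunks.add(chunk);
        }
        return chunks;
    }

    public static void main(String[] args) {
        byte[] message = new byte[128]; // e.g. a 128-byte BufferResponse
        List<byte[]> chunks = FixedLengthSplitter.split(message, 30);
        System.out.println(chunks.size());           // 5 chunks: 4 * 30 + 8
        System.out.println(chunks.get(4).length);    // last chunk holds 8 bytes
    }
}
```

With the chunk size fixed inside the helper, the tests would no longer need to pass per-test split arrays.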


[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-02-28 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r385554026
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyMessageClientDecoderDelegateTest.java
 ##
 @@ -0,0 +1,411 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.core.memory.MemorySegment;
+import org.apache.flink.core.memory.MemorySegmentFactory;
+import org.apache.flink.runtime.io.network.PartitionRequestClient;
+import org.apache.flink.runtime.io.network.buffer.Buffer;
+import org.apache.flink.runtime.io.network.buffer.FreeingBufferRecycler;
+import org.apache.flink.runtime.io.network.buffer.NetworkBuffer;
+import org.apache.flink.runtime.io.network.buffer.NetworkBufferPool;
+import org.apache.flink.runtime.io.network.partition.consumer.InputChannelID;
+import org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannel;
+import org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate;
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import org.apache.flink.shaded.netty4.io.netty.channel.embedded.EmbeddedChannel;
+import org.junit.Test;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.function.Function;
+import java.util.function.Supplier;
+
+import static junit.framework.TestCase.assertEquals;
+import static junit.framework.TestCase.assertTrue;
+import static org.apache.flink.runtime.io.network.netty.NettyMessage.BufferResponse;
+import static org.apache.flink.runtime.io.network.netty.NettyMessage.ErrorResponse;
+import static org.apache.flink.runtime.io.network.partition.InputChannelTestUtils.createRemoteInputChannel;
+import static org.apache.flink.runtime.io.network.partition.InputChannelTestUtils.createSingleInputGate;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.fail;
+import static org.mockito.Mockito.mock;
+
+/**
+ * Tests the client side message decoder.
+ */
+public class NettyMessageClientDecoderDelegateTest {
+   private static final NettyBufferPool ALLOCATOR = new NettyBufferPool(1);
+
+   /**
+    * Verifies that the client side decoder works well for unreleased input channels.
+    */
+   @Test
+   public void testDownstreamMessageDecode() throws Exception {
+   // 6 buffers required for running 2 rounds and 3 buffers each round.
+   NettyChannelAndInputChannelIds context = createPartitionRequestClientHandler(6);
+
+   Supplier<NettyMessage[]> messagesSupplier = () -> {
 
 Review comment:
  Can we extract a method `generateNettyMessages` that returns `NettyMessage[]` 
for all the tests in this class? Maybe we can introduce some arguments 
to indicate which kinds of messages are needed by different tests.
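The suggested parameterized generator could take a shape like the following self-contained sketch (the `TestMessage` stand-in and the flag names are hypothetical; the real helper would build `BufferResponse`/`ErrorResponse` instances and return `NettyMessage[]`):

```java
import java.util.ArrayList;
import java.util.List;

public class MessageGenerator {

    // Hypothetical stand-in for NettyMessage, so the sketch stays
    // self-contained without Flink's netty classes on the classpath.
    static class TestMessage {
        final String kind;
        final int size;

        TestMessage(String kind, int size) {
            this.kind = kind;
            this.size = size;
        }
    }

    // Sketch of the proposed generateNettyMessages(...) helper: boolean flags
    // select which message kinds a particular test needs, so every test in the
    // class can share one generation path.
    static List<TestMessage> generateMessages(boolean withEvent, boolean withError) {
        List<TestMessage> messages = new ArrayList<>();
        messages.add(new TestMessage("buffer", 128));
        messages.add(new TestMessage("buffer", 256));
        if (withEvent) {
            messages.add(new TestMessage("event", 32));
        }
        if (withError) {
            messages.add(new TestMessage("error", 0));
        }
        return messages;
    }

    public static void main(String[] args) {
        // A test that needs all message kinds asks for all of them.
        System.out.println(MessageGenerator.generateMessages(true, true).size()); // 4
    }
}
```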


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-02-27 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r385541960
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/ByteBufUtilsTest.java
 ##
 @@ -0,0 +1,111 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import org.apache.flink.shaded.netty4.io.netty.buffer.Unpooled;
+import org.junit.Test;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertSame;
+
+/**
+ * Tests the methods in {@link ByteBufUtils}.
+ */
+public class ByteBufUtilsTest {
+
+   @Test
+   public void testAccumulateWithoutCopy() {
+   final int sourceLength = 128;
+   final int sourceStartPosition = 32;
+   final int expectedAccumulationSize = 16;
+
+   ByteBuf src = createSourceBuffer(sourceLength, sourceStartPosition);
+
+   ByteBuf target = Unpooled.buffer(expectedAccumulationSize);
+
+   // If src has enough data and no data has been copied yet, src will be returned without modification.
+   ByteBuf accumulated = ByteBufUtils.accumulate(target, src, expectedAccumulationSize, target.readableBytes());
+
+   assertSame(src, accumulated);
+   assertEquals(sourceStartPosition, src.readerIndex());
+
+   verifyBufferContent(src, sourceStartPosition, sourceLength - sourceStartPosition, sourceStartPosition);
+   }
+
+   @Test
+   public void testAccumulateWithCopy() {
+   final int firstSourceLength = 128;
+   final int firstSourceStartPosition = 32;
+   final int secondSourceLength = 64;
+   final int secondSourceStartPosition = 0;
+   final int expectedAccumulationSize = 128;
+
+   final int firstCopyLength = firstSourceLength - firstSourceStartPosition;
+   final int secondCopyLength = expectedAccumulationSize - firstCopyLength;
+
+   ByteBuf firstSource = createSourceBuffer(firstSourceLength, firstSourceStartPosition);
+   ByteBuf secondSource = createSourceBuffer(secondSourceLength, secondSourceStartPosition);
+
+   ByteBuf target = Unpooled.buffer(expectedAccumulationSize);
+
+   // If src does not have enough data, src will be copied into target and null will be returned.
+   ByteBuf accumulated = ByteBufUtils.accumulate(
+   target,
+   firstSource,
+   expectedAccumulationSize,
+   target.readableBytes());
+   assertNull(accumulated);
+   assertEquals(firstSourceLength, firstSource.readerIndex());
+   assertEquals(firstCopyLength, target.readableBytes());
+
+   // The remaining data will be copied from the second buffer, and the target buffer will be returned
+   // after all data is accumulated.
+   accumulated = ByteBufUtils.accumulate(
+   target,
+   secondSource,
+   expectedAccumulationSize,
+   target.readableBytes());
+   assertSame(target, accumulated);
+   assertEquals(secondSourceStartPosition + secondCopyLength, secondSource.readerIndex());
+   assertEquals(expectedAccumulationSize, target.readableBytes());
+
+   verifyBufferContent(accumulated, 0, firstCopyLength, firstSourceStartPosition);
+   verifyBufferContent(accumulated, firstCopyLength, secondCopyLength, secondSourceStartPosition);
+   }
+
+   private ByteBuf createSourceBuffer(int size, int readerIndex) {
+   ByteBuf buf = Unpooled.buffer(size);
+   for (int i = 0; i < size; ++i) {
+   buf.writeByte((byte) i);
+   }
+
+   buf.readerIndex(readerIndex);
+
+

[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-02-27 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r385540907
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/ByteBufUtilsTest.java
 ##
 @@ -0,0 +1,111 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import org.apache.flink.shaded.netty4.io.netty.buffer.Unpooled;
+import org.junit.Test;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertSame;
+
+/**
+ * Tests the methods in {@link ByteBufUtils}.
+ */
+public class ByteBufUtilsTest {
+
+   @Test
+   public void testAccumulateWithoutCopy() {
+   final int sourceLength = 128;
+   final int sourceStartPosition = 32;
+   final int expectedAccumulationSize = 16;
+
+   ByteBuf src = createSourceBuffer(sourceLength, sourceStartPosition);
+
+   ByteBuf target = Unpooled.buffer(expectedAccumulationSize);
+
+   // If src has enough data and no data has been copied yet, src will be returned without modification.
+   ByteBuf accumulated = ByteBufUtils.accumulate(target, src, expectedAccumulationSize, target.readableBytes());
+
+   assertSame(src, accumulated);
+   assertEquals(sourceStartPosition, src.readerIndex());
+
+   verifyBufferContent(src, sourceStartPosition, sourceLength - sourceStartPosition, sourceStartPosition);
+   }
+
+   @Test
+   public void testAccumulateWithCopy() {
+   final int firstSourceLength = 128;
+   final int firstSourceStartPosition = 32;
+   final int secondSourceLength = 64;
+   final int secondSourceStartPosition = 0;
+   final int expectedAccumulationSize = 128;
+
+   final int firstCopyLength = firstSourceLength - firstSourceStartPosition;
+   final int secondCopyLength = expectedAccumulationSize - firstCopyLength;
+
+   ByteBuf firstSource = createSourceBuffer(firstSourceLength, firstSourceStartPosition);
+   ByteBuf secondSource = createSourceBuffer(secondSourceLength, secondSourceStartPosition);
+
+   ByteBuf target = Unpooled.buffer(expectedAccumulationSize);
+
+   // If src does not have enough data, src will be copied into target and null will be returned.
+   ByteBuf accumulated = ByteBufUtils.accumulate(
+   target,
+   firstSource,
+   expectedAccumulationSize,
+   target.readableBytes());
+   assertNull(accumulated);
+   assertEquals(firstSourceLength, firstSource.readerIndex());
+   assertEquals(firstCopyLength, target.readableBytes());
+
+   // The remaining data will be copied from the second buffer, and the target buffer will be returned
+   // after all data is accumulated.
+   accumulated = ByteBufUtils.accumulate(
+   target,
+   secondSource,
+   expectedAccumulationSize,
+   target.readableBytes());
+   assertSame(target, accumulated);
+   assertEquals(secondSourceStartPosition + secondCopyLength, secondSource.readerIndex());
+   assertEquals(expectedAccumulationSize, target.readableBytes());
+
+   verifyBufferContent(accumulated, 0, firstCopyLength, firstSourceStartPosition);
 
 Review comment:
  I guess we can remove this intermediate verification and only verify the final 
result, which covers it.
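The accumulation contract exercised by these tests can be modeled with plain arrays (a simplified sketch with illustrative names; the real `ByteBufUtils.accumulate` operates on Netty `ByteBuf`s and tracks reader indices):

```java
import java.io.ByteArrayOutputStream;

public class AccumulateSketch {

    // Minimal model of the accumulate contract the tests above verify:
    // - if the target is still empty and src alone holds enough bytes, src is
    //   returned directly without copying (the no-copy fast path);
    // - otherwise src is copied into the target and null is returned until the
    //   target holds the expected number of bytes, at which point the
    //   accumulated bytes are returned.
    static byte[] accumulate(ByteArrayOutputStream target, byte[] src, int expectedSize) {
        if (target.size() == 0 && src.length >= expectedSize) {
            return src; // enough data already, no copy needed
        }
        int toCopy = Math.min(expectedSize - target.size(), src.length);
        target.write(src, 0, toCopy);
        return target.size() == expectedSize ? target.toByteArray() : null;
    }

    public static void main(String[] args) {
        ByteArrayOutputStream target = new ByteArrayOutputStream();
        byte[] first = new byte[96];  // not enough on its own
        byte[] second = new byte[64]; // completes the expected 128 bytes

        System.out.println(accumulate(target, first, 128) == null); // true
        byte[] done = accumulate(target, second, 128);
        System.out.println(done.length); // 128
    }
}
```

Under this contract, checking only the final accumulated content also exercises the intermediate copy, which is why the intermediate verification is redundant.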



[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-02-27 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r385540280
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/ByteBufUtilsTest.java
 ##
 @@ -0,0 +1,111 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import org.apache.flink.shaded.netty4.io.netty.buffer.Unpooled;
+import org.junit.Test;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertSame;
+
+/**
+ * Tests the methods in {@link ByteBufUtils}.
+ */
+public class ByteBufUtilsTest {
+
+   @Test
+   public void testAccumulateWithoutCopy() {
+   final int sourceLength = 128;
+   final int sourceStartPosition = 32;
+   final int expectedAccumulationSize = 16;
+
+   ByteBuf src = createSourceBuffer(sourceLength, sourceStartPosition);
+
+   ByteBuf target = Unpooled.buffer(expectedAccumulationSize);
+
+   // If src has enough data and no data has been copied yet, src will be returned without modification.
+   ByteBuf accumulated = ByteBufUtils.accumulate(target, src, expectedAccumulationSize, target.readableBytes());
+
+   assertSame(src, accumulated);
+   assertEquals(sourceStartPosition, src.readerIndex());
+
+   verifyBufferContent(src, sourceStartPosition, sourceLength - sourceStartPosition, sourceStartPosition);
+   }
+
+   @Test
+   public void testAccumulateWithCopy() {
+   final int firstSourceLength = 128;
+   final int firstSourceStartPosition = 32;
+   final int secondSourceLength = 64;
 
 Review comment:
  We can make the two sources the same length to achieve the same goal and 
avoid too many variables.




[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-02-27 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r385538145
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/ByteBufUtilsTest.java
 ##
 @@ -0,0 +1,111 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import org.apache.flink.shaded.netty4.io.netty.buffer.Unpooled;
+import org.junit.Test;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertSame;
+
+/**
+ * Tests the methods in {@link ByteBufUtils}.
+ */
+public class ByteBufUtilsTest {
+
+   @Test
+   public void testAccumulateWithoutCopy() {
+   final int sourceLength = 128;
 
 Review comment:
  Remove `final` to keep consistent with the others.





