isapego commented on a change in pull request #252:
URL: https://github.com/apache/ignite-3/pull/252#discussion_r680809076
##########
File path:
modules/client-common/src/main/java/org/apache/ignite/client/proto/ClientMessageDecoder.java
##########
@@ -17,48 +17,43 @@
package org.apache.ignite.client.proto;
-import java.nio.ByteBuffer;
import java.util.Arrays;
-import java.util.List;
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
-import io.netty.handler.codec.ByteToMessageDecoder;
+import io.netty.handler.codec.LengthFieldBasedFrameDecoder;
import io.netty.util.CharsetUtil;
import org.apache.ignite.lang.IgniteException;
+import static org.apache.ignite.client.proto.ClientMessageCommon.HEADER_SIZE;
+import static org.apache.ignite.client.proto.ClientMessageCommon.MAGIC_BYTES;
+
/**
* Decodes full client messages:
* 1. MAGIC for first message.
- * 2. Payload length (varint).
- * 3. Payload (bytes).
+ * 2. Payload length (4 bytes).
+ * 3. Payload (N bytes).
*/
-public class ClientMessageDecoder extends ByteToMessageDecoder {
- /** Magic bytes before handshake. */
-    public static final byte[] MAGIC_BYTES = new byte[]{0x49, 0x47, 0x4E, 0x49}; // IGNI
-
- /** Data buffer. */
- private byte[] data = new byte[4]; // TODO: Pooled buffers IGNITE-15162.
-
- /** Remaining byte count. */
- private int cnt = -4;
-
- /** Message size. */
- private int msgSize = -1;
-
+public class ClientMessageDecoder extends LengthFieldBasedFrameDecoder {
/** Magic decoded flag. */
private boolean magicDecoded;
/** Magic decoding failed flag. */
private boolean magicFailed;
+ /**
+ * Constructor.
+ */
+ public ClientMessageDecoder() {
+        super(Integer.MAX_VALUE - HEADER_SIZE, 0, HEADER_SIZE, 0, HEADER_SIZE, true);
Review comment:
We have magic bytes to deal with random data. Or do you mean intentionally large messages meant to cause an out-of-memory error on the node? Personally, I don't think we should handle this problem by limiting the maximum message size - it may cause various usability issues. More than that, it does not actually solve anything, since one could flood the node with smaller but incomplete messages to the same effect. In any case, handling memory-overflow attacks is a complex topic and, I believe, out of scope for this ticket.
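For context, the `LengthFieldBasedFrameDecoder` arguments in the diff configure Netty to read a length field of `HEADER_SIZE` bytes at offset 0 and strip it before emitting the frame. A minimal stdlib-only sketch of that framing logic (assuming `HEADER_SIZE` is 4 and the length prefix is big-endian, which is Netty's default; the class and method names here are illustrative, not from the PR):

```java
import java.nio.ByteBuffer;

public class FrameSketch {
    /** Assumption: a 4-byte big-endian length prefix, matching HEADER_SIZE in the PR. */
    static final int HEADER_SIZE = 4;

    /** Reads one [length][payload] frame from the buffer, stripping the header. */
    static byte[] decodeFrame(ByteBuffer buf) {
        int len = buf.getInt();          // length field: HEADER_SIZE bytes at offset 0
        byte[] payload = new byte[len];  // initialBytesToStrip=HEADER_SIZE: header excluded
        buf.get(payload);
        return payload;
    }

    public static void main(String[] args) {
        // A frame carrying the 3-byte payload "abc".
        byte[] msg = {0, 0, 0, 3, 'a', 'b', 'c'};
        byte[] payload = decodeFrame(ByteBuffer.wrap(msg));
        System.out.println(new String(payload)); // prints "abc"
    }
}
```

Note that with this scheme the decoder buffers a whole frame before handing it on, which is exactly why an attacker could also tie up memory with many small, never-completed frames, as the comment above argues.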
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]