LinShunKang commented on code in PR #12545:
URL: https://github.com/apache/kafka/pull/12545#discussion_r1174387569
##########
clients/src/main/java/org/apache/kafka/common/serialization/DoubleDeserializer.java:
##########
@@ -35,4 +41,22 @@ public Double deserialize(String topic, byte[] data) {
         }
         return Double.longBitsToDouble(value);
     }
+
+    @Override
+    public Double deserialize(String topic, Headers headers, ByteBuffer data) {
+        if (data == null) {
+            return null;
+        }
+
+        if (data.remaining() != 8) {
+            throw new SerializationException("Size of data received by DoubleDeserializer is not 8");
+        }
+
+        final ByteOrder srcOrder = data.order();
+        data.order(BIG_ENDIAN);
+
+        final double value = data.getDouble(data.position());

Review Comment:
   > Yes, that's because we parse it ourselves from the byte array. But now we have an API provided by the JDK (`ByteBuffer`) that lets us read according to the byte order set on the buffer, so I'm not convinced we still need to read in `BIG_ENDIAN` order. Do you have another reason for sticking with `BIG_ENDIAN`?

   No, that's the only reason I chose method 1.
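For context, the pattern under discussion can be sketched as follows. This is a minimal, hypothetical helper (`readBigEndianDouble` is not Kafka's actual code): it forces `BIG_ENDIAN` for the read, since the serializer writes doubles in network order, uses an absolute `getDouble` so the buffer's position is not consumed, and restores the caller's declared byte order afterwards.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class ByteOrderSketch {

    // Hypothetical helper: read a big-endian double from the buffer without
    // advancing its position or permanently changing its byte order.
    static double readBigEndianDouble(ByteBuffer data) {
        final ByteOrder srcOrder = data.order();
        try {
            data.order(ByteOrder.BIG_ENDIAN);
            // Absolute get: does not advance position()
            return data.getDouble(data.position());
        } finally {
            data.order(srcOrder); // restore the caller's declared order
        }
    }

    public static void main(String[] args) {
        // Serialize 1.5 in big-endian (network) order, as the byte[] path does.
        ByteBuffer wire = ByteBuffer.allocate(8); // default order is BIG_ENDIAN
        wire.putDouble(0, 1.5);

        // Simulate a caller handing over a buffer with a different declared order.
        ByteBuffer received = wire.duplicate().order(ByteOrder.LITTLE_ENDIAN);

        System.out.println(readBigEndianDouble(received)); // prints 1.5
        System.out.println(received.order());              // LITTLE_ENDIAN, restored
    }
}
```

The alternative raised in the thread is to drop the `order(BIG_ENDIAN)` call entirely and trust whatever order the buffer already declares; the trade-off is that a buffer arriving with `LITTLE_ENDIAN` set would then decode a big-endian payload incorrectly.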