hachikuji commented on a change in pull request #9998:
URL: https://github.com/apache/kafka/pull/9998#discussion_r569647498



##########
File path: metadata/src/main/java/org/apache/kafka/metadata/MetadataRecordSerde.java
##########
@@ -0,0 +1,62 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.metadata;
+
+import org.apache.kafka.common.metadata.MetadataRecordType;
+import org.apache.kafka.common.protocol.ApiMessage;
+import org.apache.kafka.common.protocol.ObjectSerializationCache;
+import org.apache.kafka.common.protocol.Readable;
+import org.apache.kafka.common.protocol.Writable;
+import org.apache.kafka.common.utils.ByteUtils;
+import org.apache.kafka.raft.RecordSerde;
+
+public class MetadataRecordSerde implements RecordSerde<ApiMessageAndVersion> {
+
+    @Override
+    public ObjectSerializationCache newWriteContext() {
+        return new ObjectSerializationCache();
+    }
+
+    @Override
+    public int recordSize(ApiMessageAndVersion data, Object context) {
+        ObjectSerializationCache serializationCache = (ObjectSerializationCache) context;
+        int size = 0;
+        size += ByteUtils.sizeOfUnsignedVarint(data.message().apiKey());
+        size += ByteUtils.sizeOfUnsignedVarint(data.version());
+        size += data.message().size(serializationCache, data.version());
+        return size;
+    }
+
+    @Override
+    public void write(ApiMessageAndVersion data, Object context, Writable out) {
+        ObjectSerializationCache serializationCache = (ObjectSerializationCache) context;
+        out.writeUnsignedVarint(data.message().apiKey());
+        out.writeUnsignedVarint(data.version());
+        data.message().write(out, serializationCache, data.version());
+    }
+
+    @Override
+    public ApiMessageAndVersion read(Readable input, int size) {
+        short apiKey = (short) input.readUnsignedVarint();
+        short version = (short) input.readUnsignedVarint();

Review comment:
       I don't feel too strongly about the use of the unsigned type. I have seen the varint size functions pop up on some flame graphs, so the concern is valid. If it becomes a real issue, one idea would be to precompute the size during code generation. We could even hard-code the value 1 here and then write a test that fails if any api keys/versions do not fit in a single byte.
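
   The single-byte point can be checked with a small sketch of unsigned (LEB128-style) varint sizing. `VarintSizeSketch` and its method name are illustrative, not the actual `ByteUtils` implementation; the assumption is only that Kafka's unsigned varints carry 7 payload bits per byte:

```java
public class VarintSizeSketch {
    // Bytes needed to encode v as an unsigned varint: each encoded
    // byte carries 7 payload bits, so we count 7-bit shifts until
    // the remaining value fits in one byte.
    static int sizeOfUnsignedVarint(int v) {
        int bytes = 1;
        while ((v & 0xFFFFFF80) != 0) {
            bytes++;
            v >>>= 7;
        }
        return bytes;
    }

    public static void main(String[] args) {
        // Any api key or version below 128 fits in a single byte,
        // which is what a "hard-code 1" regression test would pin down.
        assert sizeOfUnsignedVarint(0) == 1;
        assert sizeOfUnsignedVarint(127) == 1;
        assert sizeOfUnsignedVarint(128) == 2;
    }
}
```

   A test along these lines over all generated record types would catch the first api key or version that crosses the 127 boundary.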
   
   @cmccabe Yeah, I still think it's probably worth having a version byte or 
something in case we need to change this in the future. We have regretted not 
having a header version in the RPC schema.
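
   To picture the version-byte idea: a hypothetical frame layout that leads with a frame version before the api key and record version varints, so the layout itself can evolve. `FramedSerdeSketch`, the constant `FRAME_VERSION_0`, and the field order are assumptions for illustration, not the actual metadata record format:

```java
import java.nio.ByteBuffer;

// Hypothetical frame: [frameVersion][apiKey varint][version varint][payload].
// An unknown frame version is rejected up front, which is what makes the
// layout evolvable later.
public class FramedSerdeSketch {
    static final byte FRAME_VERSION_0 = 0; // assumed initial frame version

    static void writeUnsignedVarint(ByteBuffer buf, int v) {
        // Standard base-128 varint: low 7 bits per byte, high bit = "more".
        while ((v & 0xFFFFFF80) != 0) {
            buf.put((byte) ((v & 0x7F) | 0x80));
            v >>>= 7;
        }
        buf.put((byte) v);
    }

    static int readUnsignedVarint(ByteBuffer buf) {
        int value = 0;
        int shift = 0;
        while (true) {
            byte b = buf.get();
            value |= (b & 0x7F) << shift;
            if ((b & 0x80) == 0) return value;
            shift += 7;
        }
    }

    static ByteBuffer write(short apiKey, short version, byte[] payload) {
        // 1 frame byte + at most 5 varint bytes each for key and version.
        ByteBuffer buf = ByteBuffer.allocate(1 + 5 + 5 + payload.length);
        buf.put(FRAME_VERSION_0);
        writeUnsignedVarint(buf, apiKey);
        writeUnsignedVarint(buf, version);
        buf.put(payload);
        buf.flip();
        return buf;
    }

    static short[] readHeader(ByteBuffer buf) {
        byte frameVersion = buf.get();
        if (frameVersion != FRAME_VERSION_0)
            throw new IllegalArgumentException("unknown frame version " + frameVersion);
        short apiKey = (short) readUnsignedVarint(buf);
        short version = (short) readUnsignedVarint(buf);
        return new short[] {apiKey, version};
    }
}
```

   The one extra byte is cheap insurance: a reader that sees an unfamiliar frame version can fail loudly instead of misparsing the record, which is exactly the flexibility a headerless format gives up.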



