vcrfxia commented on code in PR #13252:
URL: https://github.com/apache/kafka/pull/13252#discussion_r1114799670


##########
streams/src/main/java/org/apache/kafka/streams/state/internals/MeteredVersionedKeyValueStore.java:
##########
@@ -0,0 +1,227 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.streams.state.internals;
+
+import static org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImpl.maybeMeasureLatency;
+
+import java.util.Objects;
+import org.apache.kafka.common.serialization.Serde;
+import org.apache.kafka.common.utils.Time;
+import org.apache.kafka.streams.errors.ProcessorStateException;
+import org.apache.kafka.streams.processor.ProcessorContext;
+import org.apache.kafka.streams.processor.StateStore;
+import org.apache.kafka.streams.processor.StateStoreContext;
+import org.apache.kafka.streams.processor.internals.SerdeGetter;
+import org.apache.kafka.streams.query.Position;
+import org.apache.kafka.streams.query.PositionBound;
+import org.apache.kafka.streams.query.Query;
+import org.apache.kafka.streams.query.QueryConfig;
+import org.apache.kafka.streams.query.QueryResult;
+import org.apache.kafka.streams.state.KeyValueStore;
+import org.apache.kafka.streams.state.TimestampedKeyValueStore;
+import org.apache.kafka.streams.state.ValueAndTimestamp;
+import org.apache.kafka.streams.state.VersionedBytesStore;
+import org.apache.kafka.streams.state.VersionedKeyValueStore;
+import org.apache.kafka.streams.state.VersionedRecord;
+
+/**
+ * A metered {@link VersionedKeyValueStore} wrapper that is used for recording operation
+ * metrics, and hence its inner {@link VersionedBytesStore} implementation does not need to provide
+ * its own metrics collecting functionality. The inner {@code VersionedBytesStore} of this class
+ * is a {@link KeyValueStore} of type <Bytes,byte[]>, so we use {@link Serde}s
+ * to convert from <K,ValueAndTimestamp<V>> to <Bytes,byte[]>. In particular,
+ * {@link NullableValueAndTimestampSerde} is used since putting a tombstone to a versioned key-value
+ * store requires putting a null value associated with a timestamp.
+ *
+ * @param <K> The key type
+ * @param <V> The (raw) value type
+ */
+public class MeteredVersionedKeyValueStore<K, V>
+    extends WrappedStateStore<VersionedBytesStore, K, V>
+    implements VersionedKeyValueStore<K, V> {
+
+    private final MeteredVersionedKeyValueStoreInternal internal;
+
+    MeteredVersionedKeyValueStore(final VersionedBytesStore inner,
+                                  final String metricScope,
+                                  final Time time,
+                                  final Serde<K> keySerde,
+                                  final Serde<ValueAndTimestamp<V>> valueSerde) {
+        super(inner);
+        internal = new MeteredVersionedKeyValueStoreInternal(inner, metricScope, time, keySerde, valueSerde);
+    }
+
+    /**
+     * Private helper class which represents the functionality of a {@link VersionedKeyValueStore}
+     * as a {@link TimestampedKeyValueStore} so that the bulk of the metering logic may be
+     * inherited from {@link MeteredKeyValueStore}. As a result, the implementation of
+     * {@link MeteredVersionedKeyValueStore} is a simple wrapper to translate from this
+     * {@link TimestampedKeyValueStore} representation of a versioned key-value store into the
+     * {@link VersionedKeyValueStore} interface itself.
+     */
+    private class MeteredVersionedKeyValueStoreInternal
+        extends MeteredKeyValueStore<K, ValueAndTimestamp<V>>
+        implements TimestampedKeyValueStore<K, V> {
+
+        private final VersionedBytesStore inner;
+
+        MeteredVersionedKeyValueStoreInternal(final VersionedBytesStore inner,
+                                              final String metricScope,
+                                              final Time time,
+                                              final Serde<K> keySerde,
+                                              final Serde<ValueAndTimestamp<V>> valueSerde) {
+            super(inner, metricScope, time, keySerde, valueSerde);
+            this.inner = inner;
+        }
+
+        @Override
+        public void put(final K key, final ValueAndTimestamp<V> value) {
+            super.put(
+                key,
+                // versioned stores require a timestamp associated with all puts, including tombstones/deletes
+                value == null
+                    ? ValueAndTimestamp.makeAllowNullable(null, context.timestamp())
+                    : value
+            );
+        }
+
+        public ValueAndTimestamp<V> get(final K key, final long asOfTimestamp) {
+            Objects.requireNonNull(key, "key cannot be null");
+            try {
+                return maybeMeasureLatency(() -> outerValue(inner.get(keyBytes(key), asOfTimestamp)), time, getSensor);
+            } catch (final ProcessorStateException e) {
+                final String message = String.format(e.getMessage(), key);
+                throw new ProcessorStateException(message, e);
+            }
+        }
+
+        public ValueAndTimestamp<V> delete(final K key, final long timestamp) {
+            Objects.requireNonNull(key, "key cannot be null");
+            try {
+                return maybeMeasureLatency(() -> outerValue(inner.delete(keyBytes(key), timestamp)), time, deleteSensor);
+            } catch (final ProcessorStateException e) {
+                final String message = String.format(e.getMessage(), key);
+                throw new ProcessorStateException(message, e);
+            }
+        }
+
+        @Override
+        public <R> QueryResult<R> query(final Query<R> query,
+                                        final PositionBound positionBound,
+                                        final QueryConfig config) {
+            final long start = time.nanoseconds();
+            final QueryResult<R> result = wrapped().query(query, positionBound, config);
+            if (config.isCollectExecutionInfo()) {
+                result.addExecutionInfo(
+                    "Handled in " + getClass() + " in " + (time.nanoseconds() 
- start) + "ns");
+            }
+            // do not convert query or return types to/from inner bytes store to user-friendly types

Review Comment:
   Hm, I think we might be talking about slightly different things. Here's my 
current understanding of how this works for the existing key-value stores today:
   * The user passes a `KeyValueStore<Bytes, byte[]>` implementation (via 
supplier/materializer) which is used as the innermost store. This store only 
knows how to serve queries from bytes, and doesn't know anything about what the 
original key or value types are.
   * The user's store gets wrapped inside some extra layers, including the 
metered layer which knows how to serialize the actual key and value types to 
bytes, and deserialize back.
   * When the user issues an IQv2 query to the store, the query hits the outer store (i.e., the metered layer) and is passed to the inner layers from there.
   * What this means is that when a user issues an IQv2 `KeyQuery`, the key that they pass is of the actual key type, not bytes, even though the innermost store implementation they provided only knows about bytes. `MeteredKeyValueStore` has [logic](https://github.com/apache/kafka/blob/069ce59e1e33f47c000d8cdc247851f2e0a82154/streams/src/main/java/org/apache/kafka/streams/state/internals/MeteredKeyValueStore.java#L297) for serializing the key in the `KeyQuery` to bytes and passing that to the inner store, and also for deserializing the bytes in the result into the actual value type before returning the result to the user. The same thing happens with `RangeQuery`, but these are the only two query types for which `MeteredKeyValueStore` provides serialization/deserialization assistance. All other query types are direct pass-throughs to the inner store.
   
   In order to provide the same convenience for users issuing IQv2 requests to versioned stores, `MeteredVersionedKeyValueStore` should also assist in serializing/deserializing `KeyQuery` and `RangeQuery`, so that users can issue `KeyQuery<K, V>` instead of `KeyQuery<Bytes, byte[]>` to their inner store. I don't want to add this support now as part of KIP-889, though, since it should have its own KIP. So if a user wants to issue key queries to their custom versioned store implementation today, they will have to use `KeyQuery<Bytes, byte[]>`. But later, when we do add support for `KeyQuery<K, V>` at the metered store layer, users will have to stop using `KeyQuery<Bytes, byte[]>` and start using `KeyQuery<K, V>` instead -- this is the compatibility concern I called out above. Does this extra context help?


