[GitHub] [kafka] kowshik commented on a change in pull request #10218: KAFKA-12368: Added inmemory implementations for RemoteStorageManager and RemoteLogMetadataManager.

2021-04-11 Thread GitBox


kowshik commented on a change in pull request #10218:
URL: https://github.com/apache/kafka/pull/10218#discussion_r611267145



##
File path: storage/src/main/java/org/apache/kafka/server/log/remote/metadata/storage/RemoteLogMetadataCache.java
##
@@ -0,0 +1,316 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.server.log.remote.metadata.storage;
+
+import org.apache.kafka.server.log.remote.storage.RemoteLogSegmentId;
+import org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadata;
+import org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadataUpdate;
+import org.apache.kafka.server.log.remote.storage.RemoteLogSegmentState;
+import org.apache.kafka.server.log.remote.storage.RemoteResourceNotFoundException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.Collections;
+import java.util.Iterator;
+import java.util.Map;
+import java.util.NavigableMap;
+import java.util.Objects;
+import java.util.Optional;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+
+/**
+ * This class provides an in-memory cache of remote log segment metadata. This maintains the lineage of segments
+ * with respect to leader epochs.
+ *
+ * A remote log segment can go through the state transitions mentioned in {@link RemoteLogSegmentState}.
+ *
+ * This class holds all the segments which did not yet reach the terminal state, viz. DELETE_SEGMENT_FINISHED. That means any
+ * segment reaching the terminal state will get cleared from this instance.
+ * This class provides different methods to fetch segment metadata, like {@link #remoteLogSegmentMetadata(int, long)},
+ * {@link #highestOffsetForEpoch(int)}, {@link #listRemoteLogSegments(int)}, and {@link #listAllRemoteLogSegments()}. Those
+ * methods have different semantics to fetch the segment based on its state.
+ *
+ * {@link RemoteLogSegmentState#COPY_SEGMENT_STARTED}:
+ * A segment in this state is not yet copied successfully. So, these segments are not
+ * accessible for reads, but they are considered for cleanup when a partition is deleted.
+ *
+ * {@link RemoteLogSegmentState#COPY_SEGMENT_FINISHED}:
+ * A segment in this state has been copied successfully and is available for reads. It should also be
+ * available for any cleanup activity, like deleting segments, by the caller of this class.
+ *
+ * {@link RemoteLogSegmentState#DELETE_SEGMENT_STARTED}:
+ * A segment in this state is being deleted. That means it is not available for reads, but it should be
+ * available for any cleanup activity, like deleting segments, by the caller of this class.
+ *
+ * {@link RemoteLogSegmentState#DELETE_SEGMENT_FINISHED}:
+ * A segment in this state is already deleted. That means it is not available for any activity, including
+ * reads or cleanup. This cache will clear entries containing this state.
+ *
+ * The table below summarizes whether a segment with the respective state is available for the given methods.
+ *
+ * +---------------------------------+----------------------+------------------------+-------------------------+-------------------------+
+ * |  Method / SegmentState          | COPY_SEGMENT_STARTED | COPY_SEGMENT_FINISHED  | DELETE_SEGMENT_STARTED  | DELETE_SEGMENT_STARTED  |

Review comment:
   typo: The title of the last column should be `DELETE_SEGMENT_FINISHED`.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] kowshik commented on a change in pull request #10218: KAFKA-12368: Added inmemory implementations for RemoteStorageManager and RemoteLogMetadataManager.

2021-04-09 Thread GitBox


kowshik commented on a change in pull request #10218:
URL: https://github.com/apache/kafka/pull/10218#discussion_r610494694



##
File path: storage/src/main/java/org/apache/kafka/server/log/remote/metadata/storage/RemoteLogMetadataCache.java
##
@@ -0,0 +1,309 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.server.log.remote.metadata.storage;
+
+import org.apache.kafka.server.log.remote.storage.RemoteLogSegmentId;
+import org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadata;
+import org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadataUpdate;
+import org.apache.kafka.server.log.remote.storage.RemoteLogSegmentState;
+import org.apache.kafka.server.log.remote.storage.RemoteResourceNotFoundException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.Collections;
+import java.util.Iterator;
+import java.util.Map;
+import java.util.NavigableMap;
+import java.util.Objects;
+import java.util.Optional;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+
+/**
+ * This class provides an in-memory cache of remote log segment metadata. This maintains the lineage of segments
+ * with respect to leader epochs.
+ *
+ * A remote log segment can go through the state transitions mentioned in {@link RemoteLogSegmentState}.
+ *
+ * This class holds all the segments which did not yet reach the terminal state, viz. DELETE_SEGMENT_FINISHED. That means any
+ * segment reaching the terminal state will get cleared from this instance.
+ * This class provides different methods to fetch segment metadata, like {@link #remoteLogSegmentMetadata(int, long)},
+ * {@link #highestOffsetForEpoch(int)}, {@link #listRemoteLogSegments(int)}, and {@link #listAllRemoteLogSegments()}. Those
+ * methods have different semantics to fetch the segment based on its state.
+ *
+ * {@link RemoteLogSegmentState#COPY_SEGMENT_STARTED}:
+ * A segment in this state is not yet copied successfully. So, these segments are not
+ * accessible for reads, but they are considered for cleanup when a partition is deleted.
+ *
+ * {@link RemoteLogSegmentState#COPY_SEGMENT_FINISHED}:
+ * A segment in this state has been copied successfully and is available for reads. It should also be
+ * available for any cleanup activity, like deleting segments, by the caller of this class.
+ *
+ * {@link RemoteLogSegmentState#DELETE_SEGMENT_STARTED}:
+ * A segment in this state is being deleted. That means it is not available for reads, but it should be
+ * available for any cleanup activity, like deleting segments, by the caller of this class.
+ *
+ * {@link RemoteLogSegmentState#DELETE_SEGMENT_FINISHED}:
+ * A segment in this state is already deleted. That means it is not available for any activity, including
+ * reads or cleanup. This cache will clear entries containing this state.
+ *
+ * The table below summarizes whether a segment with the respective state is available for the given methods.
+ *
+ * +---------------------------------+----------------------+------------------------+-------------------------+-------------------------+
+ * |  Method / SegmentState          | COPY_SEGMENT_STARTED | COPY_SEGMENT_FINISHED  | DELETE_SEGMENT_STARTED  | DELETE_SEGMENT_STARTED  |
+ * |---------------------------------+----------------------+------------------------+-------------------------+-------------------------|
+ * | remoteLogSegmentMetadata        |          No          |          Yes           |           No            |           No            |
+ * | (int leaderEpoch, long offset)  |                      |                        |                         |                         |
+ * |---------------------------------+----------------------+------------------------+-------------------------+-------------------------|
+ * | listRemoteLogSegments           |         Yes          |          Yes           |           Yes           |           No            |
+ * | (int leaderEpoch)               |

[GitHub] [kafka] kowshik commented on a change in pull request #10218: KAFKA-12368: Added inmemory implementations for RemoteStorageManager and RemoteLogMetadataManager.

2021-04-09 Thread GitBox


kowshik commented on a change in pull request #10218:
URL: https://github.com/apache/kafka/pull/10218#discussion_r610494457



##
File path: storage/src/main/java/org/apache/kafka/server/log/remote/metadata/storage/RemoteLogLeaderEpochState.java
##
@@ -0,0 +1,172 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.server.log.remote.metadata.storage;
+
+import org.apache.kafka.server.log.remote.storage.RemoteLogSegmentId;
+import org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadata;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.Iterator;
+import java.util.Map;
+import java.util.NavigableMap;
+import java.util.Set;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentSkipListMap;
+
+/**
+ * This class represents the in-memory state of segments associated with a leader epoch. This includes the mapping of offset to
+ * segment ids, and unreferenced segments which are not mapped to any offset but exist in remote storage.
+ *
+ * This is used by {@link RemoteLogMetadataCache} to track the segments for each leader epoch.
+ */
+class RemoteLogLeaderEpochState {
+
+    // It contains the offset to segment id mapping for segments with the state COPY_SEGMENT_FINISHED.
+    private final NavigableMap<Long, RemoteLogSegmentId> offsetToId = new ConcurrentSkipListMap<>();
+
+    /**
+     * It represents unreferenced segments for this leader epoch. It contains the segments still in COPY_SEGMENT_STARTED
+     * and DELETE_SEGMENT_STARTED state, or those which have been replaced by callers with other segments having the same
+     * start offset for the leader epoch. These will be returned by {@link RemoteLogMetadataCache#listAllRemoteLogSegments()}
+     * and {@link RemoteLogMetadataCache#listRemoteLogSegments(int leaderEpoch)} so that callers can clean them up if
+     * they still exist. These will be cleaned from the cache once they reach DELETE_SEGMENT_FINISHED state.
+     */
+    private final Set<RemoteLogSegmentId> unreferencedSegmentIds = ConcurrentHashMap.newKeySet();
+
+    // It represents the highest log offset of the segments that were updated with updateHighestLogOffset.
+    private volatile Long highestLogOffset;
+
+    /**
+     * Returns all the segments associated with this leader epoch sorted by start offset in ascending order.
+     *
+     * @param idToSegmentMetadata mapping of id to segment metadata. This will be used to get RemoteLogSegmentMetadata
+     *                            for an id to be used for sorting.
+     */
+    Iterator<RemoteLogSegmentMetadata> listAllRemoteLogSegments(Map<RemoteLogSegmentId, RemoteLogSegmentMetadata> idToSegmentMetadata) {
+        // Return all the segments including unreferenced metadata.
+        int size = offsetToId.size() + unreferencedSegmentIds.size();
+        if (size == 0) {
+            return Collections.emptyIterator();
+        }
+
+        ArrayList<RemoteLogSegmentMetadata> metadataList = new ArrayList<>(size);
+        for (RemoteLogSegmentId id : offsetToId.values()) {
+            metadataList.add(idToSegmentMetadata.get(id));
+        }
+
+        if (!unreferencedSegmentIds.isEmpty()) {
+            for (RemoteLogSegmentId id : unreferencedSegmentIds) {
+                metadataList.add(idToSegmentMetadata.get(id));
+            }
+
+            // Sort only when unreferenced entries exist, as the entries in offsetToId are already sorted.
+            metadataList.sort(Comparator.comparingLong(RemoteLogSegmentMetadata::startOffset));
+        }
+
+        return metadataList.iterator();
+    }
+
+    void handleSegmentWithCopySegmentStartedState(RemoteLogSegmentId remoteLogSegmentId) {
+        // Add this to the unreferenced set of segments for the respective leader epoch.
+        unreferencedSegmentIds.add(remoteLogSegmentId);

Review comment:
   Ok, I think this is fine then.
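As an aside for readers following the thread, the bookkeeping pattern under discussion — a map of referenced (fully copied) segments plus a side set of unreferenced ones — can be sketched in a simplified, hypothetical form. Plain String ids stand in for RemoteLogSegmentId, and the method names are illustrative rather than the PR's actual API:

```java
import java.util.NavigableMap;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentSkipListMap;

public class EpochState {
    // Fully copied segments (COPY_SEGMENT_FINISHED), keyed by start offset.
    private final NavigableMap<Long, String> offsetToId = new ConcurrentSkipListMap<>();

    // Segments visible to cleanup but not to reads
    // (COPY_SEGMENT_STARTED / DELETE_SEGMENT_STARTED, or replaced at the same offset).
    private final Set<String> unreferencedSegmentIds = ConcurrentHashMap.newKeySet();

    void handleCopyStarted(String segmentId) {
        // Not yet readable; park it in the unreferenced set.
        unreferencedSegmentIds.add(segmentId);
    }

    void handleCopyFinished(String segmentId, long startOffset) {
        // Any previous segment at this start offset becomes unreferenced.
        String previous = offsetToId.put(startOffset, segmentId);
        if (previous != null) {
            unreferencedSegmentIds.add(previous);
        }
        unreferencedSegmentIds.remove(segmentId);
    }

    int referencedCount() {
        return offsetToId.size();
    }

    int unreferencedCount() {
        return unreferencedSegmentIds.size();
    }

    public static void main(String[] args) {
        EpochState state = new EpochState();
        state.handleCopyStarted("seg-A");
        state.handleCopyFinished("seg-A", 0L);
        state.handleCopyStarted("seg-B");
        state.handleCopyFinished("seg-B", 0L); // replaces seg-A at offset 0
        System.out.println(state.referencedCount());   // 1
        System.out.println(state.unreferencedCount()); // 1 (seg-A)
    }
}
```

The concurrent collections mirror the ones in the class under review, so state transitions can arrive from different threads without external locking.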








[GitHub] [kafka] kowshik commented on a change in pull request #10218: KAFKA-12368: Added inmemory implementations for RemoteStorageManager and RemoteLogMetadataManager.

2021-04-09 Thread GitBox


kowshik commented on a change in pull request #10218:
URL: https://github.com/apache/kafka/pull/10218#discussion_r610491754



##
File path: clients/src/test/java/org/apache/kafka/test/TestUtils.java
##
@@ -535,4 +536,46 @@ public static void setFieldValue(Object obj, String fieldName, Object value) thr
 field.setAccessible(true);
 field.set(obj, value);
 }
+
+/**
+ * Returns true if both iterators have same elements in the same order.
+ *
+ * @param iterator1 first iterator.
+ * @param iterator2 second iterator.
+ * @param <T> type of element in the iterators.
+ */
+public static <T> boolean sameElementsWithOrder(Iterator<T> iterator1,

Review comment:
   Sounds good
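For context, here is a self-contained sketch of the helper being discussed, folding in the simpler loop proposed later in this thread. The wrapper class name IteratorUtils is invented for the sketch; in the PR the method lives in TestUtils:

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.Objects;

public class IteratorUtils {
    /**
     * Returns true if both iterators yield equal elements in the same order.
     */
    public static <T> boolean sameElementsWithOrder(Iterator<T> iterator1, Iterator<T> iterator2) {
        while (iterator1.hasNext() && iterator2.hasNext()) {
            if (!Objects.equals(iterator1.next(), iterator2.next())) {
                return false;
            }
        }
        // Both iterators must be exhausted for the sequences to match.
        return !iterator1.hasNext() && !iterator2.hasNext();
    }

    public static void main(String[] args) {
        System.out.println(sameElementsWithOrder(
                Arrays.asList(1, 2, 3).iterator(), Arrays.asList(1, 2, 3).iterator())); // true
        System.out.println(sameElementsWithOrder(
                Arrays.asList(1, 2).iterator(), Arrays.asList(1, 2, 3).iterator()));    // false
    }
}
```

The final conjunction catches the case where one iterator is a strict prefix of the other, which a loop-only version would miss.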








[GitHub] [kafka] kowshik commented on a change in pull request #10218: KAFKA-12368: Added inmemory implementations for RemoteStorageManager and RemoteLogMetadataManager.

2021-04-08 Thread GitBox


kowshik commented on a change in pull request #10218:
URL: https://github.com/apache/kafka/pull/10218#discussion_r610305248



##
File path: storage/src/main/java/org/apache/kafka/server/log/remote/metadata/storage/RemoteLogMetadataCache.java
##
@@ -0,0 +1,309 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.server.log.remote.metadata.storage;
+
+import org.apache.kafka.server.log.remote.storage.RemoteLogSegmentId;
+import org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadata;
+import org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadataUpdate;
+import org.apache.kafka.server.log.remote.storage.RemoteLogSegmentState;
+import org.apache.kafka.server.log.remote.storage.RemoteResourceNotFoundException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.Collections;
+import java.util.Iterator;
+import java.util.Map;
+import java.util.NavigableMap;
+import java.util.Objects;
+import java.util.Optional;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+
+/**
+ * This class provides an in-memory cache of remote log segment metadata. This maintains the lineage of segments
+ * with respect to leader epochs.
+ *
+ * A remote log segment can go through the state transitions mentioned in {@link RemoteLogSegmentState}.
+ *
+ * This class holds all the segments which did not yet reach the terminal state, viz. DELETE_SEGMENT_FINISHED. That means any
+ * segment reaching the terminal state will get cleared from this instance.
+ * This class provides different methods to fetch segment metadata, like {@link #remoteLogSegmentMetadata(int, long)},
+ * {@link #highestOffsetForEpoch(int)}, {@link #listRemoteLogSegments(int)}, and {@link #listAllRemoteLogSegments()}. Those
+ * methods have different semantics to fetch the segment based on its state.
+ *
+ * {@link RemoteLogSegmentState#COPY_SEGMENT_STARTED}:
+ * A segment in this state is not yet copied successfully. So, these segments are not
+ * accessible for reads, but they are considered for cleanup when a partition is deleted.
+ *
+ * {@link RemoteLogSegmentState#COPY_SEGMENT_FINISHED}:
+ * A segment in this state has been copied successfully and is available for reads. It should also be
+ * available for any cleanup activity, like deleting segments, by the caller of this class.
+ *
+ * {@link RemoteLogSegmentState#DELETE_SEGMENT_STARTED}:
+ * A segment in this state is being deleted. That means it is not available for reads, but it should be
+ * available for any cleanup activity, like deleting segments, by the caller of this class.
+ *
+ * {@link RemoteLogSegmentState#DELETE_SEGMENT_FINISHED}:
+ * A segment in this state is already deleted. That means it is not available for any activity, including
+ * reads or cleanup. This cache will clear entries containing this state.
+ *
+ * The table below summarizes whether a segment with the respective state is available for the given methods.
+ *
+ * +---------------------------------+----------------------+------------------------+-------------------------+-------------------------+
+ * |  Method / SegmentState          | COPY_SEGMENT_STARTED | COPY_SEGMENT_FINISHED  | DELETE_SEGMENT_STARTED  | DELETE_SEGMENT_STARTED  |
+ * |---------------------------------+----------------------+------------------------+-------------------------+-------------------------|
+ * | remoteLogSegmentMetadata        |          No          |          Yes           |           No            |           No            |
+ * | (int leaderEpoch, long offset)  |                      |                        |                         |                         |
+ * |---------------------------------+----------------------+------------------------+-------------------------+-------------------------|
+ * | listRemoteLogSegments           |         Yes          |          Yes           |           Yes           |           No            |
+ * | (int leaderEpoch)               |

[GitHub] [kafka] kowshik commented on a change in pull request #10218: KAFKA-12368: Added inmemory implementations for RemoteStorageManager and RemoteLogMetadataManager.

2021-04-08 Thread GitBox


kowshik commented on a change in pull request #10218:
URL: https://github.com/apache/kafka/pull/10218#discussion_r610293058



##
File path: storage/src/main/java/org/apache/kafka/server/log/remote/metadata/storage/RemoteLogLeaderEpochState.java
##
@@ -0,0 +1,172 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.server.log.remote.metadata.storage;
+
+import org.apache.kafka.server.log.remote.storage.RemoteLogSegmentId;
+import org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadata;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.Iterator;
+import java.util.Map;
+import java.util.NavigableMap;
+import java.util.Set;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentSkipListMap;
+
+/**
+ * This class represents the in-memory state of segments associated with a leader epoch. This includes the mapping of offset to
+ * segment ids, and unreferenced segments which are not mapped to any offset but exist in remote storage.
+ *
+ * This is used by {@link RemoteLogMetadataCache} to track the segments for each leader epoch.
+ */
+class RemoteLogLeaderEpochState {
+
+    // It contains the offset to segment id mapping for segments with the state COPY_SEGMENT_FINISHED.
+    private final NavigableMap<Long, RemoteLogSegmentId> offsetToId = new ConcurrentSkipListMap<>();
+
+    /**
+     * It represents unreferenced segments for this leader epoch. It contains the segments still in COPY_SEGMENT_STARTED
+     * and DELETE_SEGMENT_STARTED state, or those which have been replaced by callers with other segments having the same
+     * start offset for the leader epoch. These will be returned by {@link RemoteLogMetadataCache#listAllRemoteLogSegments()}
+     * and {@link RemoteLogMetadataCache#listRemoteLogSegments(int leaderEpoch)} so that callers can clean them up if
+     * they still exist. These will be cleaned from the cache once they reach DELETE_SEGMENT_FINISHED state.
+     */
+    private final Set<RemoteLogSegmentId> unreferencedSegmentIds = ConcurrentHashMap.newKeySet();
+
+    // It represents the highest log offset of the segments that were updated with updateHighestLogOffset.
+    private volatile Long highestLogOffset;
+
+    /**
+     * Returns all the segments associated with this leader epoch sorted by start offset in ascending order.
+     *
+     * @param idToSegmentMetadata mapping of id to segment metadata. This will be used to get RemoteLogSegmentMetadata
+     *                            for an id to be used for sorting.
+     */
+    Iterator<RemoteLogSegmentMetadata> listAllRemoteLogSegments(Map<RemoteLogSegmentId, RemoteLogSegmentMetadata> idToSegmentMetadata) {
+        // Return all the segments including unreferenced metadata.
+        int size = offsetToId.size() + unreferencedSegmentIds.size();
+        if (size == 0) {
+            return Collections.emptyIterator();
+        }
+
+        ArrayList<RemoteLogSegmentMetadata> metadataList = new ArrayList<>(size);
+        for (RemoteLogSegmentId id : offsetToId.values()) {
+            metadataList.add(idToSegmentMetadata.get(id));

Review comment:
   Hmm here we assume that `id` should be present in the provided 
`idToSegmentMetadata`. Due to programming error, or other reasons, the caller 
may not be able to ensure this. Would it be safer if we instead threw whenever 
`id` is absent in `idToSegmentMetadata`  to catch that case?

##
File path: clients/src/test/java/org/apache/kafka/test/TestUtils.java
##
@@ -535,4 +536,46 @@ public static void setFieldValue(Object obj, String fieldName, Object value) thr
 field.setAccessible(true);
 field.set(obj, value);
 }
+
+/**
+ * Returns true if both iterators have same elements in the same order.
+ *
+ * @param iterator1 first iterator.
+ * @param iterator2 second iterator.
+ * @param <T> type of element in the iterators.
+ */
+public static <T> boolean sameElementsWithOrder(Iterator<T> iterator1,

Review comment:
   Here is a slightly simpler version:
   ```
   while (iterator1.hasNext() && iterator2.hasNext()) {
       if (!Objects.equals(iterator1.next(), iterator2.next())) {
           return false;
       }
   }

   return !iterator1.hasNext() && !iterator2.hasNext();
   ```

[GitHub] [kafka] kowshik commented on a change in pull request #10218: KAFKA-12368: Added inmemory implementations for RemoteStorageManager and RemoteLogMetadataManager.

2021-04-08 Thread GitBox


kowshik commented on a change in pull request #10218:
URL: https://github.com/apache/kafka/pull/10218#discussion_r609998576



##
File path: remote-storage/src/main/java/org/apache/kafka/server/log/remote/storage/RemoteLogMetadataCache.java
##
@@ -0,0 +1,305 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.server.log.remote.storage;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.Collections;
+import java.util.Iterator;
+import java.util.Map;
+import java.util.NavigableMap;
+import java.util.Objects;
+import java.util.Optional;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+
+/**
+ * This class provides an in-memory cache of remote log segment metadata. It maintains the lineage of segments
+ * with respect to leader epochs.
+ *
+ * A remote log segment can go through the state transitions mentioned in {@link RemoteLogSegmentState}.
+ *
+ * This class holds all the segments which have not reached the terminal state, viz. DELETE_SEGMENT_FINISHED. That
+ * means any segment reaching the terminal state is cleared from this instance.
+ * This class provides different methods to fetch segment metadata, like {@link #remoteLogSegmentMetadata(int, long)},
+ * {@link #highestOffsetForEpoch(int)}, {@link #listRemoteLogSegments(int)}, and {@link #listAllRemoteLogSegments()}.
+ * Those methods have different semantics for fetching a segment based on its state.
+ *
+ * {@link RemoteLogSegmentState#COPY_SEGMENT_STARTED}:
+ * A segment in this state is not yet copied successfully. So, these segments will not be
+ * accessible for reads, but they are considered for cleanup when a partition is deleted.
+ *
+ * {@link RemoteLogSegmentState#COPY_SEGMENT_FINISHED}:
+ * A segment in this state is copied successfully and is accessible for reads. It should also be
+ * available for any cleanup activity, like deleting segments, by the caller of this class.
+ *
+ * {@link RemoteLogSegmentState#DELETE_SEGMENT_STARTED}:
+ * A segment in this state is being deleted. That means it is not available for reads, but it should be
+ * available for any cleanup activity, like deleting segments, by the caller of this class.
+ *
+ * {@link RemoteLogSegmentState#DELETE_SEGMENT_FINISHED}:
+ * A segment in this state is already deleted. That means it is not available for any activity, including
+ * reads or cleanup. This cache clears entries that reach this state.
+ *
+ * +---------------------------------+----------------------+------------------------+-------------------------+-------------------------+
+ * | Method / Segment state          | COPY_SEGMENT_STARTED | COPY_SEGMENT_FINISHED  | DELETE_SEGMENT_STARTED  | DELETE_SEGMENT_FINISHED |
+ * |---------------------------------+----------------------+------------------------+-------------------------+-------------------------|
+ * | remoteLogSegmentMetadata        |          No          |          Yes           |           No            |           No            |
+ * | (int leaderEpoch, long offset)  |                      |                        |                         |                         |
+ * |---------------------------------+----------------------+------------------------+-------------------------+-------------------------|
+ * | listRemoteLogSegments           |          Yes         |          Yes           |           Yes           |           No            |
+ * | (int leaderEpoch)               |                      |                        |                         |                         |
+ * |---------------------------------+----------------------+------------------------+-------------------------+-------------------------|
+ * | listAllRemoteLogSegments()      |          Yes         |          Yes           |           Yes           |           No            |
+ * +---------------------------------+----------------------+------------------------+-------------------------+-------------------------+
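The per-state visibility rules in the table above can also be sketched as code. This is a standalone illustration: the enum mirrors `RemoteLogSegmentState` but is not the real class, and the two predicates are hypothetical names for the table's columns.

```java
public class SegmentVisibilitySketch {
    // Mirrors RemoteLogSegmentState for illustration only.
    enum State { COPY_SEGMENT_STARTED, COPY_SEGMENT_FINISHED, DELETE_SEGMENT_STARTED, DELETE_SEGMENT_FINISHED }

    // remoteLogSegmentMetadata(leaderEpoch, offset) only returns segments that
    // finished copying and have not started deletion.
    static boolean visibleForReads(State state) {
        return state == State.COPY_SEGMENT_FINISHED;
    }

    // listRemoteLogSegments(...) and listAllRemoteLogSegments() return every
    // segment that has not reached the terminal DELETE_SEGMENT_FINISHED state.
    static boolean visibleForListings(State state) {
        return state != State.DELETE_SEGMENT_FINISHED;
    }

    public static void main(String[] args) {
        assert visibleForReads(State.COPY_SEGMENT_FINISHED);
        assert !visibleForReads(State.DELETE_SEGMENT_STARTED);
        assert visibleForListings(State.DELETE_SEGMENT_STARTED);
        assert !visibleForListings(State.DELETE_SEGMENT_FINISHED);
    }
}
```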

[GitHub] [kafka] kowshik commented on a change in pull request #10218: KAFKA-12368: Added inmemory implementations for RemoteStorageManager and RemoteLogMetadataManager.

2021-04-06 Thread GitBox


kowshik commented on a change in pull request #10218:
URL: https://github.com/apache/kafka/pull/10218#discussion_r607959239



##
File path: clients/src/test/java/org/apache/kafka/test/TestUtils.java
##
@@ -535,4 +536,48 @@ public static void setFieldValue(Object obj, String fieldName, Object value) thr
 field.setAccessible(true);
 field.set(obj, value);
 }
+
+/**
+ * Returns true if both iterators have the same elements in the same order.
+ *
+ * @param iterator1 first iterator.
+ * @param iterator2 second iterator.
+ * @param <T> type of element in the iterators.
+ * @return

Review comment:
   nit: remove empty `@return`

##
File path: clients/src/test/java/org/apache/kafka/test/TestUtils.java
##
@@ -535,4 +536,48 @@ public static void setFieldValue(Object obj, String fieldName, Object value) thr
 field.setAccessible(true);
 field.set(obj, value);
 }
+
+/**
+ * Returns true if both iterators have the same elements in the same order.
+ *
+ * @param iterator1 first iterator.
+ * @param iterator2 second iterator.
+ * @param <T> type of element in the iterators.
+ * @return
+ */
+    public static <T> boolean sameElementsWithOrder(Iterator<T> iterator1,
+                                                    Iterator<T> iterator2) {
+        while (iterator1.hasNext()) {
+            if (!iterator2.hasNext()) {
+                return false;
+            }
+
+            Object elem1 = iterator1.next();
+            Object elem2 = iterator2.next();
+            if (!Objects.equals(elem1, elem2)) {
+                return false;
+            }
+        }
+
+        return !iterator2.hasNext();
+    }
+
+/**
+ * Returns true if both the iterators have the same set of elements, irrespective of order and duplicates.
+ *
+ * @param iterator1 first iterator.
+ * @param iterator2 second iterator.
+ * @param <T> type of element in the iterators.
+ * @return

Review comment:
   nit: remove empty `@return`

##
File path: 
clients/src/main/java/org/apache/kafka/server/log/remote/storage/RemoteLogSegmentState.java
##
@@ -87,4 +89,27 @@ public byte id() {
 public static RemoteLogSegmentState forId(byte id) {
 return STATE_TYPES.get(id);
 }
+
+    public static boolean isValidTransition(RemoteLogSegmentState srcState, RemoteLogSegmentState targetState) {
+        Objects.requireNonNull(targetState, "targetState can not be null");
+
+        if (srcState == null) {

Review comment:
   Same comment as before: https://github.com/apache/kafka/pull/10218/files#r598982742.
   Can srcState be null in practice? If not, this can be defined as an instance method.

##
File path: 
remote-storage/src/main/java/org/apache/kafka/server/log/remote/storage/RemoteLogMetadataCache.java
##
@@ -0,0 +1,305 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.server.log.remote.storage;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.Collections;
+import java.util.Iterator;
+import java.util.Map;
+import java.util.NavigableMap;
+import java.util.Objects;
+import java.util.Optional;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+
+/**
+ * This class provides an in-memory cache of remote log segment metadata. It maintains the lineage of segments
+ * with respect to leader epochs.
+ *
+ * A remote log segment can go through the state transitions mentioned in {@link RemoteLogSegmentState}.
+ *
+ * This class holds all the segments which have not reached the terminal state, viz. DELETE_SEGMENT_FINISHED. That
+ * means any segment reaching the terminal state is cleared from this instance.
+ * This class provides different methods to fetch segment metadata, like {@link #remoteLogSegmentMetadata(int, long)},
+ * {@link #highestOffsetForEpoch(int)}, {@link #listRemoteLogSegments(int)}, and {@link #listAllRemoteLogSegments()}.
+ * Those methods have different semantics for fetching a segment based on its state.
+ *
+ * {@link RemoteLogSegmentState#COPY_SEGMENT_STARTED}:
+ *
+ 

[GitHub] [kafka] kowshik commented on a change in pull request #10218: KAFKA-12368: Added inmemory implementations for RemoteStorageManager and RemoteLogMetadataManager.

2021-03-22 Thread GitBox


kowshik commented on a change in pull request #10218:
URL: https://github.com/apache/kafka/pull/10218#discussion_r599001851



##
File path: 
remote-storage/src/main/java/org/apache/kafka/server/log/remote/storage/RemoteLogMetadataCache.java
##
@@ -0,0 +1,172 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.server.log.remote.storage;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.Map;
+import java.util.NavigableMap;
+import java.util.Optional;
+import java.util.Set;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.ConcurrentSkipListMap;
+import java.util.stream.Collectors;
+
+/**
+ * This class provides an in-memory cache of remote log segment metadata. It maintains the lineage of segments
+ * with respect to epoch evolution. It also keeps track of segments which are not yet considered to be copied
+ * to remote storage.
+ */
+public class RemoteLogMetadataCache {
+    private static final Logger log = LoggerFactory.getLogger(RemoteLogMetadataCache.class);
+
+    private final ConcurrentMap<RemoteLogSegmentId, RemoteLogSegmentMetadata> idToSegmentMetadata
+            = new ConcurrentHashMap<>();
+
+    // It keeps the segments which have not yet reached the COPY_SEGMENT_FINISHED state.
+    private final Set<RemoteLogSegmentId> remoteLogSegmentIdInProgress = new HashSet<>();
+
+    // It will have all the segments except those with state COPY_SEGMENT_STARTED.
+    private final ConcurrentMap<Integer, NavigableMap<Long, RemoteLogSegmentId>> leaderEpochToOffsetToId
+            = new ConcurrentHashMap<>();
+
+    private void addRemoteLogSegmentMetadata(RemoteLogSegmentMetadata remoteLogSegmentMetadata) {
+        log.debug("Adding remote log segment metadata: [{}]", remoteLogSegmentMetadata);
+        idToSegmentMetadata.put(remoteLogSegmentMetadata.remoteLogSegmentId(), remoteLogSegmentMetadata);
+        Map<Integer, Long> leaderEpochToOffset = remoteLogSegmentMetadata.segmentLeaderEpochs();
+        for (Map.Entry<Integer, Long> entry : leaderEpochToOffset.entrySet()) {
+            leaderEpochToOffsetToId.computeIfAbsent(entry.getKey(), k -> new ConcurrentSkipListMap<>())
+                    .put(entry.getValue(), remoteLogSegmentMetadata.remoteLogSegmentId());
+        }
+    }
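The indexing loop above boils down to one `computeIfAbsent` per leader epoch. A stripped-down, self-contained sketch, with `String` standing in for `RemoteLogSegmentId` (the class and method names here are hypothetical):

```java
import java.util.Map;
import java.util.NavigableMap;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ConcurrentSkipListMap;

public class EpochIndexSketch {
    // Builds the per-epoch index the same way as the loop above:
    // leader epoch -> (start offset of the segment in that epoch -> segment id).
    static ConcurrentMap<Integer, NavigableMap<Long, String>> index(Map<Integer, Long> segmentLeaderEpochs,
                                                                    String segmentId) {
        ConcurrentMap<Integer, NavigableMap<Long, String>> leaderEpochToOffsetToId = new ConcurrentHashMap<>();
        for (Map.Entry<Integer, Long> entry : segmentLeaderEpochs.entrySet()) {
            // computeIfAbsent lazily creates the sorted per-epoch map on first use.
            leaderEpochToOffsetToId.computeIfAbsent(entry.getKey(), k -> new ConcurrentSkipListMap<>())
                    .put(entry.getValue(), segmentId);
        }
        return leaderEpochToOffsetToId;
    }

    public static void main(String[] args) {
        // A segment "seg-a" spanning leader epoch 3 (from offset 120) and epoch 4 (from offset 200).
        var idx = index(Map.of(3, 120L, 4, 200L), "seg-a");
        assert "seg-a".equals(idx.get(3).get(120L));
        assert idx.get(4).firstKey() == 200L;
    }
}
```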
+
+    public Optional<RemoteLogSegmentMetadata> remoteLogSegmentMetadata(int leaderEpoch, long offset) {
+        NavigableMap<Long, RemoteLogSegmentId> offsetToId = leaderEpochToOffsetToId.get(leaderEpoch);
+        if (offsetToId == null || offsetToId.isEmpty()) {
+            return Optional.empty();
+        }
+
+        // Look for the floor entry as the given offset may exist in this entry.
+        Map.Entry<Long, RemoteLogSegmentId> entry = offsetToId.floorEntry(offset);
+        if (entry == null) {
+            // If the offset is lower than the minimum offset available in metadata then return empty.
+            return Optional.empty();
+        }
+
+        RemoteLogSegmentMetadata metadata = idToSegmentMetadata.get(entry.getValue());
+        // Check whether the given offset with leaderEpoch exists in this segment.
+        // Check the epoch's offset boundaries within this segment:
+        //  1. Get the next epoch's start offset - 1 if it exists.
+        //  2. If no next epoch exists, the segment end offset can be considered the epoch's relative end offset.
+        Map.Entry<Integer, Long> nextEntry = metadata.segmentLeaderEpochs().higherEntry(leaderEpoch);
+        long epochEndOffset = (nextEntry != null) ? nextEntry.getValue() - 1 : metadata.endOffset();
+
+        // The sought offset should be <= the epoch's end offset.
+        return (offset > epochEndOffset) ? Optional.empty() : Optional.of(metadata);
+    }
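The floor-entry step can be exercised in isolation with a plain `TreeMap`; the offsets and ids below are made up, and `String` stands in for `RemoteLogSegmentId`:

```java
import java.util.Map;
import java.util.NavigableMap;
import java.util.Optional;
import java.util.TreeMap;

public class FloorLookupSketch {
    // Mimics the lookup: find the segment whose start offset is the greatest
    // one that is <= the requested offset.
    static Optional<String> segmentFor(NavigableMap<Long, String> offsetToId, long offset) {
        return Optional.ofNullable(offsetToId.floorEntry(offset)).map(Map.Entry::getValue);
    }

    public static void main(String[] args) {
        NavigableMap<Long, String> offsetToId = new TreeMap<>();
        offsetToId.put(0L, "seg-0");     // segment starting at offset 0
        offsetToId.put(100L, "seg-100"); // segment starting at offset 100

        assert segmentFor(offsetToId, 150L).orElseThrow().equals("seg-100");
        assert segmentFor(offsetToId, 99L).orElseThrow().equals("seg-0");
        // An offset below the smallest start offset has no floor entry.
        assert segmentFor(offsetToId, -1L).isEmpty();
    }
}
```

Note that the real method still has to bound the match on the right (the epoch-end-offset check above); `floorEntry` alone only finds a candidate segment.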
+
+    public void updateRemoteLogSegmentMetadata(RemoteLogSegmentMetadataUpdate metadataUpdate)
+            throws RemoteResourceNotFoundException {
+        log.debug("Updating remote log segment metadata: [{}]", metadataUpdate);
+        RemoteLogSegmentId remoteLogSegmentId = metadataUpdate.remoteLogSegmentId();
+        RemoteLogSegmentMetadata existingMetadata = idToSegmentMetadata.get(remoteLogSegmentId);
+        if (existingMetadata == null) {
+

[GitHub] [kafka] kowshik commented on a change in pull request #10218: KAFKA-12368: Added inmemory implementations for RemoteStorageManager and RemoteLogMetadataManager.

2021-03-22 Thread GitBox


kowshik commented on a change in pull request #10218:
URL: https://github.com/apache/kafka/pull/10218#discussion_r598964030



##
File path: 
clients/src/main/java/org/apache/kafka/server/log/remote/storage/RemoteLogSegmentState.java
##
@@ -21,14 +21,16 @@
 import java.util.Arrays;
 import java.util.Collections;
 import java.util.Map;
+import java.util.Objects;
 import java.util.function.Function;
 import java.util.stream.Collectors;
 
 /**
 * It indicates the state of the remote log segment. This will be based on the action executed on this

Review comment:
   You can drop `It` and start with `Indicates the state...`.

##
File path: 
remote-storage/src/main/java/org/apache/kafka/server/log/remote/storage/InmemoryRemoteLogMetadataManager.java
##
@@ -0,0 +1,185 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.server.log.remote.storage;
+
+import org.apache.kafka.common.TopicIdPartition;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.Map;
+import java.util.Objects;
+import java.util.Optional;
+import java.util.Set;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+
+/**
+ * This class is an implementation of {@link RemoteLogMetadataManager} backed by an in-memory store.
+ */
+public class InmemoryRemoteLogMetadataManager implements RemoteLogMetadataManager {
+    private static final Logger log = LoggerFactory.getLogger(InmemoryRemoteLogMetadataManager.class);
+
+    private final ConcurrentMap<TopicIdPartition, RemotePartitionDeleteMetadata> idToPartitionDeleteMetadata =
+            new ConcurrentHashMap<>();
+
+    private final ConcurrentMap<TopicIdPartition, RemoteLogMetadataCache> partitionToRemoteLogMetadataCache =
+            new ConcurrentHashMap<>();
+
+    @Override
+    public void addRemoteLogSegmentMetadata(RemoteLogSegmentMetadata remoteLogSegmentMetadata)
+            throws RemoteStorageException {
+        log.debug("Adding remote log segment : [{}]", remoteLogSegmentMetadata);
+        Objects.requireNonNull(remoteLogSegmentMetadata, "remoteLogSegmentMetadata can not be null");
+
+        // This method is only allowed to add a remote log segment with the initial state
+        // (which is RemoteLogSegmentState.COPY_SEGMENT_STARTED), but not to update
+        // existing remote log segment metadata.
+        if (remoteLogSegmentMetadata.state() != RemoteLogSegmentState.COPY_SEGMENT_STARTED) {

Review comment:
   Can this be checked inside `RemoteLogMetadataCache.addToInProgress()` instead of here?

##
File path: 
remote-storage/src/test/java/org/apache/kafka/server/log/remote/storage/InmemoryRemoteStorageManager.java
##
@@ -0,0 +1,171 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.server.log.remote.storage;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.ByteArrayInputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.nio.file.Files;
+import java.util.Collections;
+import java.util.Map;
+import java.util.Objects;
+import java.util.concurrent.ConcurrentHashMap;
+
+/**
+ * This class is an implementation of {@link RemoteStorageManager} backed by an in-memory store.
+ */
+public class InmemoryRemoteStorageManager implements RemoteStorageManager {
+    private static final Logger log = LoggerFactory.getLogger(InmemoryRemoteStorageManager.class);
+
+// map of key to log data, 

[GitHub] [kafka] kowshik commented on a change in pull request #10218: KAFKA-12368: Added inmemory implementations for RemoteStorageManager and RemoteLogMetadataManager.

2021-03-17 Thread GitBox


kowshik commented on a change in pull request #10218:
URL: https://github.com/apache/kafka/pull/10218#discussion_r595492577



##
File path: 
clients/src/main/java/org/apache/kafka/server/log/remote/storage/RemoteLogSegmentState.java
##
@@ -87,4 +89,27 @@ public byte id() {
 public static RemoteLogSegmentState forId(byte id) {
 return STATE_TYPES.get(id);
 }
+
+    public static boolean isValidTransition(RemoteLogSegmentState srcState, RemoteLogSegmentState targetState) {
+        Objects.requireNonNull(targetState, "targetState can not be null");
+
+        if (srcState == null) {
+            // If the source state is null, check the target state against the initial state, viz. COPY_SEGMENT_STARTED.
+            // Wanted to keep this logic simple here by taking null for srcState, instead of creating one more state like
+            // COPY_SEGMENT_NOT_STARTED and having the caller do the null check and pass that state.
+            return targetState == COPY_SEGMENT_STARTED;
+        } else if (srcState == targetState) {

Review comment:
   1. Will it be useful to place the implementation of this validation in a separate module, so that it can be reused with `RLMMWithTopicStorage` in the future?
   2. Suggestion from the standpoint of code readability/efficiency: Would it make sense to replace the `if-else` logic by looking up from a `Map<RemoteLogSegmentState, Set<RemoteLogSegmentState>>`, where the key is the source state and the value is a set of allowed target states?
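The map-based lookup suggested in (2) could look roughly like the following sketch. The enum mirrors `RemoteLogSegmentState`, and the allowed-transition sets shown are illustrative, not necessarily the exact set the PR settles on:

```java
import java.util.EnumSet;
import java.util.Map;
import java.util.Set;

public class TransitionTableSketch {
    enum State { COPY_SEGMENT_STARTED, COPY_SEGMENT_FINISHED, DELETE_SEGMENT_STARTED, DELETE_SEGMENT_FINISHED }

    // Source state -> set of allowed target states. A null source means
    // "no state yet", for which only COPY_SEGMENT_STARTED is a valid start.
    private static final Map<State, Set<State>> ALLOWED = Map.of(
            State.COPY_SEGMENT_STARTED, EnumSet.of(State.COPY_SEGMENT_FINISHED, State.DELETE_SEGMENT_STARTED),
            State.COPY_SEGMENT_FINISHED, EnumSet.of(State.DELETE_SEGMENT_STARTED),
            State.DELETE_SEGMENT_STARTED, EnumSet.of(State.DELETE_SEGMENT_FINISHED),
            State.DELETE_SEGMENT_FINISHED, EnumSet.noneOf(State.class));

    static boolean isValidTransition(State srcState, State targetState) {
        if (srcState == null) {
            return targetState == State.COPY_SEGMENT_STARTED;
        }
        // A self-transition is never in the allowed set, matching the
        // srcState == targetState rejection in the quoted code.
        return ALLOWED.get(srcState).contains(targetState);
    }
}
```

One advantage of this shape is that the whole transition table sits in one place and can be reused by any other state enum with the same pattern.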
   
   





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] kowshik commented on a change in pull request #10218: KAFKA-12368: Added inmemory implementations for RemoteStorageManager and RemoteLogMetadataManager.

2021-03-16 Thread GitBox


kowshik commented on a change in pull request #10218:
URL: https://github.com/apache/kafka/pull/10218#discussion_r595493561



##
File path: 
clients/src/main/java/org/apache/kafka/server/log/remote/storage/RemotePartitionDeleteState.java
##
@@ -83,4 +85,25 @@ public static RemotePartitionDeleteState forId(byte id) {
 return STATE_TYPES.get(id);
 }
 
+    public static boolean isValidTransition(RemotePartitionDeleteState srcState,

Review comment:
   I have the same suggestions from `RemoteLogSegmentState` for this as well. Please refer to this comment: https://github.com/apache/kafka/pull/10218#discussion_r595492577









[GitHub] [kafka] kowshik commented on a change in pull request #10218: KAFKA-12368: Added inmemory implementations for RemoteStorageManager and RemoteLogMetadataManager.

2021-03-16 Thread GitBox


kowshik commented on a change in pull request #10218:
URL: https://github.com/apache/kafka/pull/10218#discussion_r595492577



##
File path: 
clients/src/main/java/org/apache/kafka/server/log/remote/storage/RemoteLogSegmentState.java
##
@@ -87,4 +89,27 @@ public byte id() {
 public static RemoteLogSegmentState forId(byte id) {
 return STATE_TYPES.get(id);
 }
+
+    public static boolean isValidTransition(RemoteLogSegmentState srcState, RemoteLogSegmentState targetState) {
+        Objects.requireNonNull(targetState, "targetState can not be null");
+
+        if (srcState == null) {
+            // If the source state is null, check the target state against the initial state, viz. COPY_SEGMENT_STARTED.
+            // Wanted to keep this logic simple here by taking null for srcState, instead of creating one more state like
+            // COPY_SEGMENT_NOT_STARTED and having the caller do the null check and pass that state.
+            return targetState == COPY_SEGMENT_STARTED;
+        } else if (srcState == targetState) {

Review comment:
   1. Will it be useful to place the implementation of this validation in a separate module, so that it can be reused with `RLMMWithTopicStorage` in the future?
   2. Suggestion from the standpoint of code readability: Would it make sense to replace the `if-else` logic by looking up from a `Map<RemoteLogSegmentState, Set<RemoteLogSegmentState>>`, where the key is the source state and the value is a set of allowed target states?
   
   

##
File path: 
remote-storage/src/main/java/org/apache/kafka/server/log/remote/storage/InmemoryRemoteLogMetadataManager.java
##
@@ -0,0 +1,185 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.server.log.remote.storage;
+
+import org.apache.kafka.common.TopicIdPartition;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.Map;
+import java.util.Objects;
+import java.util.Optional;
+import java.util.Set;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+
+/**
+ * This class is an implementation of {@link RemoteLogMetadataManager} backed by an in-memory store.
+ */
+public class InmemoryRemoteLogMetadataManager implements RemoteLogMetadataManager {

Review comment:
   We may want to think more about the locking semantics for this class and `RemoteLogMetadataCache`.
   Are we sure there would _not_ be use cases where we need to serialize mutations across the individually thread-safe attributes? If the answer is no, then using a fine-grained `Object` lock makes more sense because we can use it to guard critical sections.
   
   Should we evaluate this upfront?
   
   cc @junrao 
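The fine-grained `Object` lock alluded to here would look roughly like the following sketch. The maps, ids, and method names are simplified stand-ins for the actual metadata types, not the real implementation:

```java
import java.util.HashMap;
import java.util.Map;

public class LockSketch {
    // One private lock serializes mutations that span two otherwise
    // independent, individually thread-safe-looking maps.
    private final Object lock = new Object();
    private final Map<String, String> segmentMetadata = new HashMap<>();
    private final Map<String, String> deleteMetadata = new HashMap<>();

    void add(String id, String metadata) {
        synchronized (lock) {
            segmentMetadata.put(id, metadata);
        }
    }

    // Both mutations happen atomically with respect to other callers that
    // also synchronize on `lock`, so no one observes a half-done move.
    boolean moveToDeleted(String id) {
        synchronized (lock) {
            String removed = segmentMetadata.remove(id);
            if (removed == null) {
                return false;
            }
            deleteMetadata.put(id, removed);
            return true;
        }
    }

    boolean isDeleted(String id) {
        synchronized (lock) {
            return deleteMetadata.containsKey(id);
        }
    }
}
```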

##
File path: 
clients/src/main/java/org/apache/kafka/server/log/remote/storage/RemotePartitionDeleteState.java
##
@@ -83,4 +85,25 @@ public static RemotePartitionDeleteState forId(byte id) {
 return STATE_TYPES.get(id);
 }
 
+    public static boolean isValidTransition(RemotePartitionDeleteState srcState,

Review comment:
   I have the same suggestions from `RemoteLogSegmentState` for this as well. Please refer to this comment:

##
File path: 
remote-storage/src/main/java/org/apache/kafka/server/log/remote/storage/InmemoryRemoteLogMetadataManager.java
##
@@ -0,0 +1,185 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the 

[GitHub] [kafka] kowshik commented on a change in pull request #10218: KAFKA-12368: Added inmemory implementations for RemoteStorageManager and RemoteLogMetadataManager.

2021-03-10 Thread GitBox


kowshik commented on a change in pull request #10218:
URL: https://github.com/apache/kafka/pull/10218#discussion_r592022166



##
File path: 
remote-storage/src/main/java/org/apache/kafka/server/log/remote/storage/InmemoryRemoteLogMetadataManager.java
##
@@ -0,0 +1,173 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.server.log.remote.storage;
+
+import org.apache.kafka.common.TopicIdPartition;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.Map;
+import java.util.Objects;
+import java.util.Optional;
+import java.util.Set;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+
+/**
+ * This class is an implementation of {@link RemoteLogMetadataManager} backed by an in-memory store.
+ */
+public class InmemoryRemoteLogMetadataManager implements RemoteLogMetadataManager {
+    private static final Logger log = LoggerFactory.getLogger(InmemoryRemoteLogMetadataManager.class);
+
+    private final ConcurrentMap<TopicIdPartition, RemotePartitionDeleteMetadata> idToPartitionDeleteMetadata =
+            new ConcurrentHashMap<>();
+
+    private final ConcurrentMap<TopicIdPartition, RemoteLogMetadataCache> partitionToRemoteLogMetadataCache =
+            new ConcurrentHashMap<>();
+
+    @Override
+    public void addRemoteLogSegmentMetadata(RemoteLogSegmentMetadata remoteLogSegmentMetadata)
+            throws RemoteStorageException {
+        Objects.requireNonNull(remoteLogSegmentMetadata, "remoteLogSegmentMetadata can not be null");
+
+        // This method is only allowed to add a remote log segment with the initial state
+        // (which is RemoteLogSegmentState.COPY_SEGMENT_STARTED), but not to update
+        // existing remote log segment metadata.
+        if (remoteLogSegmentMetadata.state() != RemoteLogSegmentState.COPY_SEGMENT_STARTED) {
+            throw new IllegalArgumentException("Given remoteLogSegmentMetadata should have state as "
+                    + RemoteLogSegmentState.COPY_SEGMENT_STARTED
+                    + " but it contains state as: " + remoteLogSegmentMetadata.state());
+        }
+
+        log.debug("Adding remote log segment : [{}]", remoteLogSegmentMetadata);
+
+        RemoteLogSegmentId remoteLogSegmentId = remoteLogSegmentMetadata.remoteLogSegmentId();
+
+        RemoteLogMetadataCache remoteLogMetadataCache = partitionToRemoteLogMetadataCache
+                .computeIfAbsent(remoteLogSegmentId.topicIdPartition(), id -> new RemoteLogMetadataCache());
+
+        remoteLogMetadataCache.addToInProgress(remoteLogSegmentMetadata);
+    }
+
+    @Override
+    public void updateRemoteLogSegmentMetadata(RemoteLogSegmentMetadataUpdate rlsmUpdate)
+            throws RemoteStorageException {
+        Objects.requireNonNull(rlsmUpdate, "rlsmUpdate can not be null");
+
+        // Callers should use addRemoteLogSegmentMetadata to add RemoteLogSegmentMetadata with state as
+        // RemoteLogSegmentState.COPY_SEGMENT_STARTED.
+        if (rlsmUpdate.state() == RemoteLogSegmentState.COPY_SEGMENT_STARTED) {
+            throw new IllegalArgumentException("Given remoteLogSegmentMetadata should not have the state as: "
+                    + RemoteLogSegmentState.COPY_SEGMENT_STARTED);
+        }
+        log.debug("Updating remote log segment: [{}]", rlsmUpdate);
+        RemoteLogSegmentId remoteLogSegmentId = rlsmUpdate.remoteLogSegmentId();
+        TopicIdPartition topicIdPartition = remoteLogSegmentId.topicIdPartition();
+        RemoteLogMetadataCache remoteLogMetadataCache = partitionToRemoteLogMetadataCache.get(topicIdPartition);
+        if (remoteLogMetadataCache == null) {
+            throw new RemoteResourceNotFoundException("No partition metadata found for : " + topicIdPartition);
+        }
+
+        remoteLogMetadataCache.updateRemoteLogSegmentMetadata(rlsmUpdate);
+    }
+
+    @Override
+    public Optional<RemoteLogSegmentMetadata> remoteLogSegmentMetadata(TopicIdPartition topicIdPartition,
+                                                                       long offset,
+                                                                       int epochForOffset)
+            throws RemoteStorageException {
+

[GitHub] [kafka] kowshik commented on a change in pull request #10218: KAFKA-12368: Added inmemory implementations for RemoteStorageManager and RemoteLogMetadataManager.

2021-03-10 Thread GitBox


kowshik commented on a change in pull request #10218:
URL: https://github.com/apache/kafka/pull/10218#discussion_r592124191



##
File path: 
remote-storage/src/main/java/org/apache/kafka/server/log/remote/storage/InmemoryRemoteLogMetadataManager.java
##
@@ -0,0 +1,173 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.server.log.remote.storage;
+
+import org.apache.kafka.common.TopicIdPartition;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.Map;
+import java.util.Objects;
+import java.util.Optional;
+import java.util.Set;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+
+/**
+ * This class is an implementation of {@link RemoteLogMetadataManager} backed by an in-memory store.
+ */
+public class InmemoryRemoteLogMetadataManager implements RemoteLogMetadataManager {
+    private static final Logger log = LoggerFactory.getLogger(InmemoryRemoteLogMetadataManager.class);
+
+    private final ConcurrentMap<TopicIdPartition, RemotePartitionDeleteMetadata> idToPartitionDeleteMetadata =
+            new ConcurrentHashMap<>();
+
+    private final ConcurrentMap<TopicIdPartition, RemoteLogMetadataCache> partitionToRemoteLogMetadataCache =
+            new ConcurrentHashMap<>();
+
+    @Override
+    public void addRemoteLogSegmentMetadata(RemoteLogSegmentMetadata remoteLogSegmentMetadata)
+            throws RemoteStorageException {
+        Objects.requireNonNull(remoteLogSegmentMetadata, "remoteLogSegmentMetadata can not be null");
+
+        // This method is only allowed to add a remote log segment with the initial state
+        // (which is RemoteLogSegmentState.COPY_SEGMENT_STARTED), but not to update
+        // existing remote log segment metadata.
+        if (remoteLogSegmentMetadata.state() != RemoteLogSegmentState.COPY_SEGMENT_STARTED) {
+            throw new IllegalArgumentException("Given remoteLogSegmentMetadata should have state as "
+                    + RemoteLogSegmentState.COPY_SEGMENT_STARTED
+                    + " but it contains state as: " + remoteLogSegmentMetadata.state());
+        }
+
+        log.debug("Adding remote log segment : [{}]", remoteLogSegmentMetadata);
+
+        RemoteLogSegmentId remoteLogSegmentId = remoteLogSegmentMetadata.remoteLogSegmentId();
+
+        RemoteLogMetadataCache remoteLogMetadataCache = partitionToRemoteLogMetadataCache
+                .computeIfAbsent(remoteLogSegmentId.topicIdPartition(), id -> new RemoteLogMetadataCache());
+
+        remoteLogMetadataCache.addToInProgress(remoteLogSegmentMetadata);
+    }
+
+    @Override
+    public void updateRemoteLogSegmentMetadata(RemoteLogSegmentMetadataUpdate rlsmUpdate)
+            throws RemoteStorageException {
+        Objects.requireNonNull(rlsmUpdate, "rlsmUpdate can not be null");
+
+        // Callers should use addRemoteLogSegmentMetadata to add RemoteLogSegmentMetadata with state as
+        // RemoteLogSegmentState.COPY_SEGMENT_STARTED.
+        if (rlsmUpdate.state() == RemoteLogSegmentState.COPY_SEGMENT_STARTED) {
+            throw new IllegalArgumentException("Given remoteLogSegmentMetadata should not have the state as: "
+                    + RemoteLogSegmentState.COPY_SEGMENT_STARTED);
+        }
+        log.debug("Updating remote log segment: [{}]", rlsmUpdate);
+        RemoteLogSegmentId remoteLogSegmentId = rlsmUpdate.remoteLogSegmentId();
+        TopicIdPartition topicIdPartition = remoteLogSegmentId.topicIdPartition();
+        RemoteLogMetadataCache remoteLogMetadataCache = partitionToRemoteLogMetadataCache.get(topicIdPartition);
+        if (remoteLogMetadataCache == null) {
+            throw new RemoteResourceNotFoundException("No partition metadata found for : " + topicIdPartition);
+        }
+
+        remoteLogMetadataCache.updateRemoteLogSegmentMetadata(rlsmUpdate);
+    }
+
+@Override
+    public Optional<RemoteLogSegmentMetadata> remoteLogSegmentMetadata(TopicIdPartition topicIdPartition,
+                                                                       long offset,
+                                                                       int epochForOffset)
+            throws RemoteStorageException {
+
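The two methods above split the segment lifecycle between creation and update: `addRemoteLogSegmentMetadata` accepts only the initial `COPY_SEGMENT_STARTED` state, while `updateRemoteLogSegmentMetadata` rejects exactly that state. A minimal, self-contained sketch of that validation rule (the enum values mirror `RemoteLogSegmentState`, but `LifecycleValidator` is a simplified stand-in, not the Kafka API):

```java
// Simplified stand-in for RemoteLogSegmentState; not the actual Kafka enum.
enum SegmentState { COPY_SEGMENT_STARTED, COPY_SEGMENT_FINISHED, DELETE_SEGMENT_STARTED, DELETE_SEGMENT_FINISHED }

final class LifecycleValidator {
    // Mirrors addRemoteLogSegmentMetadata: only the initial state may be added.
    static void validateAdd(SegmentState state) {
        if (state != SegmentState.COPY_SEGMENT_STARTED) {
            throw new IllegalArgumentException("add requires COPY_SEGMENT_STARTED, got: " + state);
        }
    }

    // Mirrors updateRemoteLogSegmentMetadata: the initial state must never arrive as an update.
    static void validateUpdate(SegmentState state) {
        if (state == SegmentState.COPY_SEGMENT_STARTED) {
            throw new IllegalArgumentException("update must not carry COPY_SEGMENT_STARTED");
        }
    }
}
```

The split forces every segment through a single entry point for its initial record, so later updates can assume the base metadata already exists.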

[GitHub] [kafka] kowshik commented on a change in pull request #10218: KAFKA-12368: Added inmemory implementations for RemoteStorageManager and RemoteLogMetadataManager.

2021-03-10 Thread GitBox


kowshik commented on a change in pull request #10218:
URL: https://github.com/apache/kafka/pull/10218#discussion_r592123658



##
File path: remote-storage/src/main/java/org/apache/kafka/server/log/remote/storage/RemoteLogMetadataCache.java
##
@@ -0,0 +1,152 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.server.log.remote.storage;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.Map;
+import java.util.NavigableMap;
+import java.util.Optional;
+import java.util.Set;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.ConcurrentSkipListMap;
+import java.util.stream.Collectors;
+
+/**
+ * This class provides an in-memory cache of remote log segment metadata. It maintains the lineage of segments
+ * with respect to leader epoch evolution. It also keeps track of segments whose copy to remote storage is not
+ * yet complete.
+ */
+public class RemoteLogMetadataCache {
+    private static final Logger log = LoggerFactory.getLogger(RemoteLogMetadataCache.class);
+
+    private final ConcurrentMap<RemoteLogSegmentId, RemoteLogSegmentMetadata> idToSegmentMetadata
+            = new ConcurrentHashMap<>();
+
+    private final Set<RemoteLogSegmentId> remoteLogSegmentIdInProgress = new HashSet<>();
+
+    private final ConcurrentMap<Integer, NavigableMap<Long, RemoteLogSegmentId>> leaderEpochToOffsetToId
+            = new ConcurrentHashMap<>();
+
+    public RemoteLogMetadataCache() {
+    }
+
+    private void addRemoteLogSegmentMetadata(RemoteLogSegmentMetadata remoteLogSegmentMetadata) {
+        log.debug("Adding remote log segment metadata: [{}]", remoteLogSegmentMetadata);

Review comment:
   Is it useful to add a check against it?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
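The `leaderEpochToOffsetToId` field quoted above maps each leader epoch to a sorted offset index, which is what makes the offset-based lookup in `remoteLogSegmentMetadata(topicIdPartition, offset, epochForOffset)` efficient: the segment covering an offset is the one with the greatest start offset not exceeding it. A hedged sketch of that floor lookup, assuming the per-epoch `NavigableMap` is keyed by segment start offset (consistent with the `ConcurrentSkipListMap` import in the quoted file; names and the String segment id are simplifications, not the Kafka API):

```java
import java.util.Map;
import java.util.NavigableMap;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ConcurrentSkipListMap;

final class EpochOffsetIndex {
    // Per leader epoch: segment start offset -> segment id (a plain String here).
    private final ConcurrentMap<Integer, NavigableMap<Long, String>> leaderEpochToOffsetToId =
            new ConcurrentHashMap<>();

    void put(int leaderEpoch, long startOffset, String segmentId) {
        leaderEpochToOffsetToId
                .computeIfAbsent(leaderEpoch, e -> new ConcurrentSkipListMap<>())
                .put(startOffset, segmentId);
    }

    // Returns the segment whose start offset is the greatest one <= the requested offset.
    Optional<String> lookup(int leaderEpoch, long offset) {
        NavigableMap<Long, String> offsetToId = leaderEpochToOffsetToId.get(leaderEpoch);
        if (offsetToId == null) {
            return Optional.empty();
        }
        Map.Entry<Long, String> entry = offsetToId.floorEntry(offset);
        return entry == null ? Optional.empty() : Optional.of(entry.getValue());
    }
}
```

`floorEntry` gives the lookup in O(log n) per epoch, which is why a skip-list map is used rather than a hash map for the inner index.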




[GitHub] [kafka] kowshik commented on a change in pull request #10218: KAFKA-12368: Added inmemory implementations for RemoteStorageManager and RemoteLogMetadataManager.

2021-03-10 Thread GitBox


kowshik commented on a change in pull request #10218:
URL: https://github.com/apache/kafka/pull/10218#discussion_r591905906



##
File path: remote-storage/src/main/java/org/apache/kafka/server/log/remote/storage/RemoteLogMetadataCache.java
##
@@ -0,0 +1,152 @@
+public class RemoteLogMetadataCache {
+    private static final Logger log = LoggerFactory.getLogger(RemoteLogMetadataCache.class);
+
+    private final ConcurrentMap<RemoteLogSegmentId, RemoteLogSegmentMetadata> idToSegmentMetadata
+            = new ConcurrentHashMap<>();
+
+    private final Set<RemoteLogSegmentId> remoteLogSegmentIdInProgress = new HashSet<>();
+
+    private final ConcurrentMap<Integer, NavigableMap<Long, RemoteLogSegmentId>> leaderEpochToOffsetToId
+            = new ConcurrentHashMap<>();
+
+public RemoteLogMetadataCache() {

Review comment:
   This c'tor can be removed in exchange for the default generated c'tor.

##
File path: remote-storage/src/main/java/org/apache/kafka/server/log/remote/storage/RemoteLogMetadataCache.java
##
@@ -0,0 +1,152 @@
+public class RemoteLogMetadataCache {
+    private static final Logger log = LoggerFactory.getLogger(RemoteLogMetadataCache.class);
+
+    private final ConcurrentMap<RemoteLogSegmentId, RemoteLogSegmentMetadata> idToSegmentMetadata
+            = new ConcurrentHashMap<>();
+
+    private final Set<RemoteLogSegmentId> remoteLogSegmentIdInProgress = new HashSet<>();
+
+    private final ConcurrentMap<Integer, NavigableMap<Long, RemoteLogSegmentId>> leaderEpochToOffsetToId

Review comment:
   Looking at the implementation, it appears we maintain some rules on when 
a `RemoteLogSegmentId` exists in one of these data structures versus all of 
them. It would be useful to briefly document those rules, and mention 
invariants (if any). For example, when an upload is in progress it is not (yet) 
added to this map.
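The review comment above asks for those membership rules to be documented. A hedged sketch of the invariant it describes — a segment whose upload is in progress is tracked by id and in the in-progress set, and becomes offset-addressable only once its copy finishes (simplified types and method names, not the Kafka implementation):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.NavigableMap;
import java.util.Set;
import java.util.TreeMap;

final class SegmentVisibility {
    private final Map<String, Long> idToStartOffset = new HashMap<>();
    private final Set<String> inProgress = new HashSet<>();
    // Offset index; populated only for fully copied segments.
    private final NavigableMap<Long, String> offsetToId = new TreeMap<>();

    // COPY_SEGMENT_STARTED: track the segment, but keep it out of the offset index.
    void startCopy(String segmentId, long startOffset) {
        idToStartOffset.put(segmentId, startOffset);
        inProgress.add(segmentId);
    }

    // COPY_SEGMENT_FINISHED: the segment leaves the in-progress set and
    // becomes addressable by offset.
    void finishCopy(String segmentId) {
        inProgress.remove(segmentId);
        offsetToId.put(idToStartOffset.get(segmentId), segmentId);
    }

    boolean isOffsetAddressable(String segmentId) {
        return offsetToId.containsValue(segmentId);
    }
}
```

Stating the rule this way makes the invariant checkable: at any time, every id in the offset index is also in the id map, and no id is simultaneously in-progress and offset-addressable.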

##
File path: