clintropolis commented on code in PR #18844:
URL: https://github.com/apache/druid/pull/18844#discussion_r2696337787


##########
processing/src/main/java/org/apache/druid/guice/DruidSecondaryModule.java:
##########
@@ -164,6 +168,22 @@ public WireTransferableContext getWireTransferableContext(
     return new WireTransferableContext(smileMapper, concreteDeserializer, isUseLegacyFrameSerialization());
   }
 
+  @Provides
+  @LazySingleton
+  @Deterministic
+  public ObjectMapper getSortedMapper(
+      Injector injector,
+      Map<ByteBuffer, WireTransferable.Deserializer> wtDeserializers
+  )
+  {
+    final ObjectMapper sortedMapper = new DefaultObjectMapper();

Review Comment:
   is this cool? as in, does it matter that this mapper will be missing all of the jackson modules that get registered with the normal jsonMapper?
   
   also, it seems like we only inject it so that we can build a `DefaultIndexingStateFingerprintMapper`, so should it just be an internal implementation detail of `DefaultIndexingStateFingerprintMapper`? I would imagine that in the future, once this is made more pluggable, we would want to get the fingerprint mapper from the supervisor (if it is configurable per datasource) or from some fingerprint factory (if it is system wide) instead of from this special object mapper used by the default impl, unless i'm missing something
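For illustration, a rough sketch of how deterministic serialization could stay an internal detail of the fingerprint implementation rather than a Guice-provided mapper. All names here are hypothetical stand-ins (not the actual Druid classes), and a sorted map plus SHA-256 stands in for the key-sorting `ObjectMapper`:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Map;
import java.util.TreeMap;

// Hypothetical sketch: the fingerprint impl owns its own deterministic
// serialization, so callers never see the special mapper.
class FingerprintSketch
{
  // Serializes the state with keys in sorted order so that logically equal
  // states produce the same fingerprint regardless of map insertion order.
  public static String fingerprint(Map<String, String> state)
  {
    final StringBuilder canonical = new StringBuilder();
    for (Map.Entry<String, String> e : new TreeMap<>(state).entrySet()) {
      canonical.append(e.getKey()).append('=').append(e.getValue()).append(';');
    }
    try {
      final MessageDigest sha = MessageDigest.getInstance("SHA-256");
      final byte[] digest = sha.digest(canonical.toString().getBytes(StandardCharsets.UTF_8));
      final StringBuilder hex = new StringBuilder();
      for (byte b : digest) {
        hex.append(String.format("%02x", b));
      }
      return hex.toString();
    }
    catch (NoSuchAlgorithmException e) {
      throw new RuntimeException(e);
    }
  }
}
```

The point is only that key ordering (the reason for the sorted mapper) can be hidden behind the fingerprinting method itself.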



##########
server/src/main/java/org/apache/druid/metadata/SQLMetadataConnector.java:
##########
@@ -355,6 +366,8 @@ public void createSegmentTable(final String tableName)
     columns.add("used BOOLEAN NOT NULL");
     columns.add("payload %2$s NOT NULL");
     columns.add("used_status_last_updated VARCHAR(255) NOT NULL");
+    columns.add("indexing_state_fingerprint VARCHAR(255)");
+    columns.add("upgraded_from_segment_id VARCHAR(255)");
 
     if (centralizedDatasourceSchemaConfig.isEnabled()) {

Review Comment:
   i know this isn't new or yours, but it feels weird that we conditionally 
define the schema based on some config.... 



##########
server/src/main/java/org/apache/druid/metadata/IndexerSQLMetadataStorageCoordinator.java:
##########
@@ -462,6 +466,13 @@ public SegmentPublishResult commitSegmentsAndMetadata(
             );
           }
       );
+
+      // Mark indexing state fingerprints as active after successful publish
+      if (result.isSuccess()) {
+        markIndexingStateFingerprintsAsActive(result.getSegments());
+      }

Review Comment:
   should this be done as part of the same transaction that does the other 
stuff? same question for other similar calls in this file. i guess it probably 
doesn't matter much in practice....
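To sketch what "same transaction" would look like, here is a toy atomic runner (not Druid's actual SQL transaction layer; all names are illustrative) where the publish and the fingerprint activation either both commit or neither does:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical sketch: a transaction runner that only commits if every
// operation in the callback succeeded.
class TxSketch
{
  public final List<String> committed = new ArrayList<>();

  // Applies all operations to a scratch buffer; on any exception nothing
  // is committed, mimicking a rollback.
  public void inTransaction(Consumer<List<String>> work)
  {
    final List<String> scratch = new ArrayList<>();
    work.accept(scratch);       // may throw; then the commit below never runs
    committed.addAll(scratch);  // "commit" only after all work succeeded
  }
}
```

Usage matching the question: `tx.inTransaction(ops -> { ops.add("publish segments"); ops.add("mark fingerprints active"); })` keeps the activation from being lost if the process dies between publish and mark.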



##########
server/src/main/java/org/apache/druid/metadata/IndexerSQLMetadataStorageCoordinator.java:
##########
@@ -2685,6 +2713,43 @@ public Map<String, Set<String>> retrieveUpgradedToSegmentIds(
     return upgradedToSegmentIds;
   }
 
+  /**
+   * Marks indexing state fingerprints as active (non-pending) for successfully published segments.
+   * <p>
+   * Extracts unique indexing state fingerprints from the given segments and marks them as active
+   * in the indexing state storage. This is called after successful segment publishing to indicate
+   * that the indexing state is no longer pending and can be retained with the regular grace period.
+   *
+   * @param segments The segments that were successfully published
+   */
+  private void markIndexingStateFingerprintsAsActive(Set<DataSegment> segments)
+  {
+    if (segments == null || segments.isEmpty()) {
+      return;
+    }
+
+    // Collect unique non-null indexing state fingerprints
+    final Set<String> fingerprints = segments.stream()
+                                             .map(DataSegment::getIndexingStateFingerprint)
+                                             .filter(fp -> fp != null && !fp.isEmpty())
+                                             .collect(Collectors.toSet());
+
+    // Mark each fingerprint as active
+    for (String fingerprint : fingerprints) {
+      try {
+        int rowsUpdated = indexingStateStorage.markIndexingStatesAsActive(fingerprint);

Review Comment:
   any reason not to set these all active in a single call?
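A single-call version could bind all fingerprints into one statement, e.g. one `UPDATE ... WHERE fingerprint IN (...)` with bound parameters (or JDBI's list binding). A minimal sketch of building such a statement; the table and column names are illustrative, not the actual schema:

```java
import java.util.Set;
import java.util.stream.Collectors;

// Hypothetical sketch: one parameterized UPDATE covering every fingerprint,
// instead of one statement per fingerprint.
class BatchUpdateSketch
{
  public static String buildMarkActiveSql(Set<String> fingerprints)
  {
    final String placeholders = fingerprints.stream()
                                            .map(fp -> "?")
                                            .collect(Collectors.joining(", "));
    return "UPDATE indexing_states SET pending = FALSE WHERE fingerprint IN (" + placeholders + ")";
  }
}
```

Besides fewer round trips, a single statement also makes the "all fingerprints activated or none" behavior easier to reason about.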



##########
indexing-service/src/main/java/org/apache/druid/indexing/overlord/duty/KillUnreferencedIndexingState.java:
##########
@@ -0,0 +1,79 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.indexing.overlord.duty;
+
+import org.apache.druid.indexing.overlord.config.OverlordMetadataCleanupConfig;
+import org.apache.druid.java.util.common.logger.Logger;
+import org.apache.druid.segment.metadata.IndexingStateStorage;
+import org.joda.time.DateTime;
+
+import javax.inject.Inject;
+import java.util.List;
+
+/**
+ * Duty that cleans up unreferenced indexing states from the indexing state storage.
+ * <p>
+ * The cleanup process involves:
+ * <ol>
+ *   <li>Marking unreferenced indexing states as unused.</li>
+ *   <li>Repairing any unused states that are still referenced by segments.</li>
+ *   <li>Deleting unused indexing states older than the configured retention duration.</li>
+ *   <li>Deleting any pending indexing states that are older than the configured retention duration.</li>
+ * </ol>
+ */
+public class KillUnreferencedIndexingState extends OverlordMetadataCleanupDuty
+{
+  private static final Logger log = new Logger(KillUnreferencedIndexingState.class);
+  private final IndexingStateStorage indexingStateStorage;
+
+  @Inject
+  public KillUnreferencedIndexingState(
+      OverlordMetadataCleanupConfig config,
+      IndexingStateStorage indexingStateStorage
+  )
+  {
+    super("indexingStates", config);
+    this.indexingStateStorage = indexingStateStorage;
+  }
+
+  @Override
  protected int cleanupEntriesCreatedBeforeDurationToRetain(DateTime minCreatedTime)
+  {
+    // 1: Mark unreferenced states as unused

Review Comment:
   the current comments seem sort of redundant with the code, but it does seem like it would be useful to instead summarize why we do the marking and repair steps before deleting the unused states



##########
indexing-service/src/main/java/org/apache/druid/indexing/overlord/config/OverlordMetadataCleanupConfig.java:
##########
@@ -0,0 +1,100 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.indexing.overlord.config;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import org.apache.druid.common.config.Configs;
+import org.joda.time.Duration;
+
+import java.util.Objects;
+
+public class OverlordMetadataCleanupConfig

Review Comment:
   i guess this is `MetadataCleanupConfig` with an extra pending duration? maybe worth a javadoc to explain when you might want to use one or the other (i think this one is for when we track pending state in the same table as the thing itself?). Also, I'm having trouble seeing what is specific to the overlord about this, re: naming



##########
indexing-service/src/main/java/org/apache/druid/indexing/overlord/duty/OverlordMetadataCleanupDuty.java:
##########
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.indexing.overlord.duty;
+
+import org.apache.druid.indexing.overlord.config.OverlordMetadataCleanupConfig;
+import org.apache.druid.java.util.common.DateTimes;
+import org.apache.druid.java.util.common.logger.Logger;
+import org.joda.time.DateTime;
+
+/**
+ * Performs cleanup of stale metadata entries created before a configured retain duration.
+ * <p>
+ * In every invocation of {@link #run}, the duty checks if the {@code cleanupPeriod}
+ * has elapsed since the {@link #lastCleanupTime}. If it has, then the method
+ * {@link #cleanupEntriesCreatedBeforeDurationToRetain(DateTime)} is invoked. Otherwise, the duty
+ * completes immediately without making any changes.
+ */
+public abstract class OverlordMetadataCleanupDuty implements OverlordDuty
+{
+  private static final Logger log = new Logger(OverlordMetadataCleanupDuty.class);
+
+  private final String entryType;
+  private final OverlordMetadataCleanupConfig cleanupConfig;
+
+  private DateTime lastCleanupTime = DateTimes.utc(0);
+
+  protected OverlordMetadataCleanupDuty(String entryType, OverlordMetadataCleanupConfig cleanupConfig)
+  {
+    this.entryType = entryType;
+    this.cleanupConfig = cleanupConfig;
+
+    if (cleanupConfig.isCleanupEnabled()) {
+      log.debug(
+          "Enabled cleanup of [%s] with period [%s] and durationToRetain [%s].",
+          entryType, cleanupConfig.getCleanupPeriod(), cleanupConfig.getDurationToRetain()
+      );
+    }
+  }
+
+  @Override
+  public void run()
+  {
+    if (!cleanupConfig.isCleanupEnabled()) {
+      return;
+    }
+
+    final DateTime now = getCurrentTime();
+
+    // Perform cleanup only if cleanup period has elapsed
+    if (lastCleanupTime.plus(cleanupConfig.getCleanupPeriod()).isBefore(now)) {
+      lastCleanupTime = now;
+
+      try {
+        DateTime minCreatedTime = now.minus(cleanupConfig.getDurationToRetain());
+        int deletedEntries = cleanupEntriesCreatedBeforeDurationToRetain(minCreatedTime);
+        if (deletedEntries > 0) {
+          log.info("Removed [%,d] [%s] created before [%s].", deletedEntries, entryType, minCreatedTime);
+        }
+        DateTime pendingMinCreatedTime = now.minus(cleanupConfig.getPendingDurationToRetain());
+        int deletedPendingEntries = cleanupEntriesCreatedBeforePendingDurationToRetain(pendingMinCreatedTime);
+        if (deletedPendingEntries > 0) {
+          log.info("Removed [%,d] pending entries [%s] created before [%s].", deletedPendingEntries, entryType, pendingMinCreatedTime);
+        }
+      }
+      catch (Exception e) {
+        log.error(e, "Failed to perform cleanup of [%s]", entryType);
+      }
+    }
+  }
+
+  @Override
+  public boolean isEnabled()
+  {
+    return cleanupConfig.isCleanupEnabled();
+  }
+
+  @Override
+  public DutySchedule getSchedule()
+  {
+    if (isEnabled()) {
+      return new DutySchedule(cleanupConfig.getCleanupPeriod().getMillis(), 0);
+    } else {
+      return new DutySchedule(0, 0);
+    }
+  }
+
+  /**
+   * Cleans up metadata entries created before the {@code minCreatedTime} calculated with {@link OverlordMetadataCleanupConfig#durationToRetain}.
+   * <p>
+   * This method is not invoked if the {@code cleanupPeriod} has not elapsed since the {@link #lastCleanupTime}.
+   *
+   * @return Number of deleted metadata entries
+   */
+  protected abstract int cleanupEntriesCreatedBeforeDurationToRetain(DateTime minCreatedTime);
+
+  /**
+   * Cleans up pending metadata entries created before the {@code minCreatedTime} calculated with {@link OverlordMetadataCleanupConfig#pendingDurationToRetain}.
+   * <p>
+   * This method is not invoked if the {@code cleanupPeriod} has not elapsed since the {@link #lastCleanupTime}.
+   *
+   * @return Number of deleted pending metadata entries
+   */
+  protected abstract int cleanupEntriesCreatedBeforePendingDurationToRetain(DateTime minCreatedTime);
+
+  protected DateTime getCurrentTime()

Review Comment:
   this is pretty weird and should probably have javadocs to indicate that it exists to make the current time controllable in tests
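To make the intent concrete, a minimal sketch of documenting that seam (using `java.time.Instant` as a stand-in for Joda-Time's `DateTime`; the class and method names around it are illustrative):

```java
import java.time.Instant;

// Hypothetical sketch: the clock accessor is overridable solely so tests can
// freeze time and exercise the cleanup-period logic deterministically.
class CleanupDutySketch
{
  /**
   * Returns the current time. Overridable only so that tests can supply a
   * fixed clock; production code should never override this.
   */
  protected Instant getCurrentTime()
  {
    return Instant.now();
  }

  // The period check from run(), isolated so it can be tested with a fixed clock.
  public boolean periodElapsed(Instant lastCleanup, long periodMillis)
  {
    return lastCleanup.plusMillis(periodMillis).isBefore(getCurrentTime());
  }
}
```

A test then subclasses and pins `getCurrentTime()` to a constant, which is the usual alternative to injecting a full clock abstraction.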



##########
server/src/main/java/org/apache/druid/metadata/SQLMetadataConnector.java:
##########
@@ -1243,12 +1300,12 @@ private void validateSegmentsTable()
         (tableHasColumn(segmentsTables, "schema_fingerprint")
          && tableHasColumn(segmentsTables, "num_rows"));
 
-    if (tableHasColumn(segmentsTables, "used_status_last_updated") && schemaPersistenceRequirementMet) {
+    if (tableHasColumn(segmentsTables, "used_status_last_updated") && schemaPersistenceRequirementMet && tableHasColumn(segmentsTables, "indexing_state_fingerprint")) {
       // do nothing
     } else {
       throw new ISE(
           "Cannot start Druid as table[%s] has an incompatible schema."
-          + " Reason: One or all of these columns [used_status_last_updated, schema_fingerprint, num_rows] does not exist in table."
+          + " Reason: One or all of these columns [used_status_last_updated, schema_fingerprint, num_rows, indexing_state_fingerprint] does not exist in table."

Review Comment:
   nit: might be about time to break this down and have separate messages per 
problem
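One way to break it down, sketched with illustrative names (the real method checks live columns via `tableHasColumn`): compute exactly which required columns are absent and report only those.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Hypothetical sketch: per-problem validation messages instead of one
// catch-all listing every column that might be missing.
class SchemaValidationSketch
{
  private static final List<String> REQUIRED_COLUMNS = List.of(
      "used_status_last_updated", "schema_fingerprint", "num_rows", "indexing_state_fingerprint"
  );

  // Returns the required columns absent from the actual table, in declaration order.
  public static List<String> missingColumns(Set<String> actualColumns)
  {
    final List<String> missing = new ArrayList<>();
    for (String column : REQUIRED_COLUMNS) {
      if (!actualColumns.contains(column)) {
        missing.add(column);
      }
    }
    return missing;
  }

  public static String errorMessage(String table, List<String> missing)
  {
    return "Cannot start Druid as table[" + table + "] is missing required columns" + missing + ".";
  }
}
```

The operator then sees precisely which migration they skipped rather than a list of suspects.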



##########
indexing-service/src/main/java/org/apache/druid/indexing/overlord/duty/OverlordMetadataCleanupDuty.java:
##########
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.indexing.overlord.duty;
+
+import org.apache.druid.indexing.overlord.config.OverlordMetadataCleanupConfig;
+import org.apache.druid.java.util.common.DateTimes;
+import org.apache.druid.java.util.common.logger.Logger;
+import org.joda.time.DateTime;
+
+/**
+ * Performs cleanup of stale metadata entries created before a configured retain duration.
+ * <p>
+ * In every invocation of {@link #run}, the duty checks if the {@code cleanupPeriod}
+ * has elapsed since the {@link #lastCleanupTime}. If it has, then the method
+ * {@link #cleanupEntriesCreatedBeforeDurationToRetain(DateTime)} is invoked. Otherwise, the duty
+ * completes immediately without making any changes.
+ */
+public abstract class OverlordMetadataCleanupDuty implements OverlordDuty
+{
+  private static final Logger log = new Logger(OverlordMetadataCleanupDuty.class);
+
+  private final String entryType;
+  private final OverlordMetadataCleanupConfig cleanupConfig;
+
+  private DateTime lastCleanupTime = DateTimes.utc(0);
+
+  protected OverlordMetadataCleanupDuty(String entryType, OverlordMetadataCleanupConfig cleanupConfig)
+  {
+    this.entryType = entryType;
+    this.cleanupConfig = cleanupConfig;
+
+    if (cleanupConfig.isCleanupEnabled()) {
+      log.debug(
+          "Enabled cleanup of [%s] with period [%s] and durationToRetain [%s].",
+          entryType, cleanupConfig.getCleanupPeriod(), cleanupConfig.getDurationToRetain()
+      );
+    }
+  }
+
+  @Override
+  public void run()
+  {
+    if (!cleanupConfig.isCleanupEnabled()) {
+      return;
+    }
+
+    final DateTime now = getCurrentTime();
+
+    // Perform cleanup only if cleanup period has elapsed
+    if (lastCleanupTime.plus(cleanupConfig.getCleanupPeriod()).isBefore(now)) {
+      lastCleanupTime = now;
+
+      try {
+        DateTime minCreatedTime = now.minus(cleanupConfig.getDurationToRetain());
+        int deletedEntries = cleanupEntriesCreatedBeforeDurationToRetain(minCreatedTime);
+        if (deletedEntries > 0) {
+          log.info("Removed [%,d] [%s] created before [%s].", deletedEntries, entryType, minCreatedTime);
+        }
+        DateTime pendingMinCreatedTime = now.minus(cleanupConfig.getPendingDurationToRetain());
+        int deletedPendingEntries = cleanupEntriesCreatedBeforePendingDurationToRetain(pendingMinCreatedTime);
+        if (deletedPendingEntries > 0) {
+          log.info("Removed [%,d] pending entries [%s] created before [%s].", deletedPendingEntries, entryType, pendingMinCreatedTime);
+        }
+      }
+      catch (Exception e) {
+        log.error(e, "Failed to perform cleanup of [%s]", entryType);
+      }
+    }
+  }
+
+  @Override
+  public boolean isEnabled()
+  {
+    return cleanupConfig.isCleanupEnabled();
+  }
+
+  @Override
+  public DutySchedule getSchedule()
+  {
+    if (isEnabled()) {
+      return new DutySchedule(cleanupConfig.getCleanupPeriod().getMillis(), 0);
+    } else {
+      return new DutySchedule(0, 0);
+    }
+  }
+
+  /**
+   * Cleans up metadata entries created before the {@code minCreatedTime} calculated with {@link OverlordMetadataCleanupConfig#durationToRetain}.
+   * <p>
+   * This method is not invoked if the {@code cleanupPeriod} has not elapsed since the {@link #lastCleanupTime}.
+   *
+   * @return Number of deleted metadata entries
+   */
+  protected abstract int cleanupEntriesCreatedBeforeDurationToRetain(DateTime minCreatedTime);
+
+  /**
+   * Cleans up pending metadata entries created before the {@code minCreatedTime} calculated with {@link OverlordMetadataCleanupConfig#pendingDurationToRetain}.
+   * <p>
+   * This method is not invoked if the {@code cleanupPeriod} has not elapsed since the {@link #lastCleanupTime}.
+   *
+   * @return Number of deleted pending metadata entries
+   */
+  protected abstract int cleanupEntriesCreatedBeforePendingDurationToRetain(DateTime minCreatedTime);

Review Comment:
   same comment as ^, `cleanupPendingEntriesCreatedBefore` or 
`cleanupPendingEntriesOlderThan`



##########
server/src/main/java/org/apache/druid/metadata/SQLMetadataConnector.java:
##########
@@ -229,6 +229,17 @@ protected boolean isRootCausePacketTooBigException(Throwable t)
     return false;
   }
 
+  /**
+   * Checks if the root cause of the given exception is a unique constraint violation.
+   *
+   * @return false by default. Specific implementations should override this method
+   * to correctly classify their unique constraint violation exceptions.
+   */
+  public boolean isUniqueConstraintViolation(Throwable t)
+  {
+    return false;
+  }

Review Comment:
   should there be a default implementation for this? based on the usage, it looks like not implementing it means something will explode instead of eating a (possibly expected?) exception
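A default along those lines could walk the cause chain looking for the standard SQLState: most JDBC drivers report unique-constraint violations as SQLState `23505` (or the generic integrity-constraint class `23000`, which MySQL uses). A self-contained sketch of that idea, not the actual Druid method:

```java
import java.sql.SQLException;

// Hypothetical sketch of a non-trivial default: classify by SQLState rather
// than returning false, so connectors only override for driver quirks.
class UniqueViolationSketch
{
  public static boolean isUniqueConstraintViolation(Throwable t)
  {
    for (Throwable cause = t; cause != null; cause = cause.getCause()) {
      if (cause instanceof SQLException) {
        final String sqlState = ((SQLException) cause).getSQLState();
        if ("23505".equals(sqlState) || "23000".equals(sqlState)) {
          return true;
        }
      }
    }
    return false;
  }
}
```

With something like this as the base behavior, a connector that doesn't override it would still swallow the expected duplicate-key case instead of blowing up.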



##########
indexing-service/src/main/java/org/apache/druid/indexing/overlord/duty/OverlordMetadataCleanupDuty.java:
##########
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.indexing.overlord.duty;
+
+import org.apache.druid.indexing.overlord.config.OverlordMetadataCleanupConfig;
+import org.apache.druid.java.util.common.DateTimes;
+import org.apache.druid.java.util.common.logger.Logger;
+import org.joda.time.DateTime;
+
+/**
+ * Performs cleanup of stale metadata entries created before a configured retain duration.
+ * <p>
+ * In every invocation of {@link #run}, the duty checks if the {@code cleanupPeriod}
+ * has elapsed since the {@link #lastCleanupTime}. If it has, then the method
+ * {@link #cleanupEntriesCreatedBeforeDurationToRetain(DateTime)} is invoked. Otherwise, the duty
+ * completes immediately without making any changes.
+ */
+public abstract class OverlordMetadataCleanupDuty implements OverlordDuty
+{
+  private static final Logger log = new Logger(OverlordMetadataCleanupDuty.class);
+
+  private final String entryType;
+  private final OverlordMetadataCleanupConfig cleanupConfig;
+
+  private DateTime lastCleanupTime = DateTimes.utc(0);
+
+  protected OverlordMetadataCleanupDuty(String entryType, OverlordMetadataCleanupConfig cleanupConfig)
+  {
+    this.entryType = entryType;
+    this.cleanupConfig = cleanupConfig;
+
+    if (cleanupConfig.isCleanupEnabled()) {
+      log.debug(
+          "Enabled cleanup of [%s] with period [%s] and durationToRetain [%s].",
+          entryType, cleanupConfig.getCleanupPeriod(), cleanupConfig.getDurationToRetain()
+      );
+    }
+  }
+
+  @Override
+  public void run()
+  {
+    if (!cleanupConfig.isCleanupEnabled()) {
+      return;
+    }
+
+    final DateTime now = getCurrentTime();
+
+    // Perform cleanup only if cleanup period has elapsed
+    if (lastCleanupTime.plus(cleanupConfig.getCleanupPeriod()).isBefore(now)) {
+      lastCleanupTime = now;
+
+      try {
+        DateTime minCreatedTime = now.minus(cleanupConfig.getDurationToRetain());
+        int deletedEntries = cleanupEntriesCreatedBeforeDurationToRetain(minCreatedTime);
+        if (deletedEntries > 0) {
+          log.info("Removed [%,d] [%s] created before [%s].", deletedEntries, entryType, minCreatedTime);
+        }
+        DateTime pendingMinCreatedTime = now.minus(cleanupConfig.getPendingDurationToRetain());
+        int deletedPendingEntries = cleanupEntriesCreatedBeforePendingDurationToRetain(pendingMinCreatedTime);
+        if (deletedPendingEntries > 0) {
+          log.info("Removed [%,d] pending entries [%s] created before [%s].", deletedPendingEntries, entryType, pendingMinCreatedTime);
+        }
+      }
+      catch (Exception e) {
+        log.error(e, "Failed to perform cleanup of [%s]", entryType);
+      }
+    }
+  }
+
+  @Override
+  public boolean isEnabled()
+  {
+    return cleanupConfig.isCleanupEnabled();
+  }
+
+  @Override
+  public DutySchedule getSchedule()
+  {
+    if (isEnabled()) {
+      return new DutySchedule(cleanupConfig.getCleanupPeriod().getMillis(), 0);
+    } else {
+      return new DutySchedule(0, 0);
+    }
+  }
+
+  /**
+   * Cleans up metadata entries created before the {@code minCreatedTime} calculated with {@link OverlordMetadataCleanupConfig#durationToRetain}.
+   * <p>
+   * This method is not invoked if the {@code cleanupPeriod} has not elapsed since the {@link #lastCleanupTime}.
+   *
+   * @return Number of deleted metadata entries
+   */
+  protected abstract int cleanupEntriesCreatedBeforeDurationToRetain(DateTime minCreatedTime);

Review Comment:
   nit: seems like this could just be `cleanupEntriesCreatedBefore` or 
`cleanupEntriesOlderThan` to be more consistent with underlying impl method 
name, since it isn't passed the duration or anything



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

