sumitagrawl commented on code in PR #8203:
URL: https://github.com/apache/ozone/pull/8203#discussion_r2092567719
##########
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/utils/db/RDBStoreAbstractIterator.java:
##########
@@ -30,30 +32,34 @@
* @param <RAW> the raw type.
*/
abstract class RDBStoreAbstractIterator<RAW>
- implements TableIterator<RAW, Table.KeyValue<RAW, RAW>> {
+ implements TableIterator<RAW, RDBStoreAbstractIterator.AutoCloseableRawKeyValue<RAW>> {
private static final Logger LOG =
LoggerFactory.getLogger(RDBStoreAbstractIterator.class);
private final ManagedRocksIterator rocksDBIterator;
private final RDBTable rocksDBTable;
- private Table.KeyValue<RAW, RAW> currentEntry;
+ private ReferenceCountedObject<RawKeyValue<RAW>> currentEntry;
+ private RawKeyValue<RAW> previousKeyValue;
// This is for schemas that use a fixed-length
// prefix for each key.
private final RAW prefix;
+ private Boolean hasNext;
Review Comment:
If this iterator can be accessed from multiple threads, hasNext should be a volatile or atomic variable so that updates are visible across threads.
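To illustrate the concern, here is a minimal sketch (class and method names are hypothetical, not Ozone's actual code) of a tri-state hasNext flag made safe for cross-thread visibility with volatile; java.util.concurrent.atomic.AtomicReference<Boolean> would work equally well:

```java
// Illustrative sketch only: null = not yet computed, TRUE/FALSE = computed.
// volatile guarantees that a reset in one thread is seen by readers in another.
class IteratorState {
  private volatile Boolean hasNext;

  void compute(boolean value) { hasNext = value; }

  void invalidate() { hasNext = null; }

  boolean isKnown() { return hasNext != null; }

  boolean get() {
    Boolean snapshot = hasNext;  // read the field once to avoid a null race
    if (snapshot == null) {
      throw new IllegalStateException("hasNext not computed yet");
    }
    return snapshot;
  }
}
```

Note that volatile alone only fixes visibility; if two threads can call setCurrentEntry() concurrently, the retain/release pairing would still need external synchronization.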
##########
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/utils/db/RDBStoreAbstractIterator.java:
##########
@@ -150,5 +181,21 @@ public final void removeFromDB() throws IOException {
@Override
public void close() {
rocksDBIterator.close();
+ closed = true;
+ releaseEntry();
+ }
+
+ public static final class AutoCloseableRawKeyValue<RAW> extends RawKeyValue<RAW> implements AutoCloseable {
Review Comment:
The class can be renamed to CloseableRawKeyValue; the Auto prefix can be dropped.
##########
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/utils/db/RDBStoreAbstractIterator.java:
##########
@@ -78,32 +84,52 @@ final RAW getPrefix() {
@Override
public final void forEachRemaining(
- Consumer<? super Table.KeyValue<RAW, RAW>> action) {
+ Consumer<? super AutoCloseableRawKeyValue<RAW>> action) {
while (hasNext()) {
- action.accept(next());
+ AutoCloseableRawKeyValue<RAW> entry = next();
+ action.accept(entry);
+ }
+ }
+
+ private void releaseEntry() {
+ if (currentEntry != null) {
+ currentEntry.release();
}
+ currentEntry = null;
+ hasNext = null;
}
private void setCurrentEntry() {
- if (rocksDBIterator.get().isValid()) {
+ boolean isValid = !closed && rocksDBIterator.get().isValid();
+ if (isValid) {
currentEntry = getKeyValue();
+ currentEntry.retain();
} else {
currentEntry = null;
}
+ setHasNext(isValid, currentEntry);
+ }
+
+ public void setHasNext(boolean isValid, ReferenceCountedObject<RawKeyValue<RAW>> entry) {
+ this.hasNext = isValid && (prefix == null || startsWithPrefix(entry.get().getKey()));
}
@Override
public final boolean hasNext() {
- return rocksDBIterator.get().isValid() &&
- (prefix == null || startsWithPrefix(key()));
+ if (hasNext == null) {
+ setCurrentEntry();
+ }
+ return hasNext;
}
@Override
- public final Table.KeyValue<RAW, RAW> next() {
- setCurrentEntry();
- if (currentEntry != null) {
+ public final AutoCloseableRawKeyValue<RAW> next() {
+ if (hasNext()) {
+ AutoCloseableRawKeyValue<RAW> entry = new AutoCloseableRawKeyValue<>(currentEntry);
+ this.previousKeyValue = currentEntry.get();
rocksDBIterator.get().next();
Review Comment:
rocksDBIterator.get().next() can also be moved into setCurrentEntry(), so that fetching the next entry and updating the hasNext flag happen together.
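A simplified sketch of the suggested restructuring (hypothetical names, with a plain java.util.Iterator standing in for the RocksDB iterator): advancing the underlying cursor, caching the current entry, and computing hasNext all live in setCurrentEntry(), and next() merely hands out the prefetched value:

```java
import java.util.Iterator;
import java.util.NoSuchElementException;

// Illustrative prefetching iterator: fetch + advance happen in one place.
class PrefetchingIterator<T> {
  private final Iterator<T> underlying;
  private T current;        // prefetched entry, null when exhausted
  private Boolean hasNext;  // null means "not yet computed"

  PrefetchingIterator(Iterator<T> underlying) { this.underlying = underlying; }

  private void setCurrentEntry() {
    // positioning and the hasNext flag are updated together, as suggested
    if (underlying.hasNext()) {
      current = underlying.next();
      hasNext = true;
    } else {
      current = null;
      hasNext = false;
    }
  }

  boolean hasNext() {
    if (hasNext == null) { setCurrentEntry(); }
    return hasNext;
  }

  T next() {
    if (!hasNext()) { throw new NoSuchElementException(); }
    T entry = current;
    hasNext = null;  // force a fresh prefetch on the next call
    return entry;
  }
}
```

The benefit is that the cursor can never be out of sync with the cached entry, since both change only inside setCurrentEntry().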
##########
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/utils/db/RDBStoreAbstractIterator.java:
##########
@@ -125,23 +151,28 @@ public final void seekToLast() {
} else {
throw new UnsupportedOperationException("seekToLast: prefix != null");
}
- setCurrentEntry();
+ releaseEntry();
}
@Override
- public final Table.KeyValue<RAW, RAW> seek(RAW key) {
+ public final AutoCloseableRawKeyValue<RAW> seek(RAW key) {
seek0(key);
+ releaseEntry();
setCurrentEntry();
- return currentEntry;
+ // The current entry should only be closed by next(), and thus closing the returned entry should be a no-op.
+ if (hasNext()) {
+ return new AutoCloseableRawKeyValue<>(currentEntry);
+ }
+ return null;
}
@Override
public final void removeFromDB() throws IOException {
if (rocksDBTable == null) {
throw new UnsupportedOperationException("remove");
}
- if (currentEntry != null) {
- delete(currentEntry.getKey());
+ if (previousKeyValue != null) {
+ delete(previousKeyValue.getKey());
Review Comment:
Since the entry is reference counted, calling this after the entry has been released can cause problems. Keep only the reference-counted object, and throw an exception if it is already closed.
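A minimal sketch of the fail-fast behavior the review asks for (hypothetical names, no RocksDB dependency): instead of trusting a cached previousKeyValue that may outlive its backing buffers, the iterator tracks its closed state and rejects removeFromDB() once closed:

```java
// Illustrative sketch: guard removeFromDB() against use after close/release.
class RemoveGuard {
  private byte[] lastReturnedKey;  // key of the entry last returned by next()
  private boolean closed;

  void onNext(byte[] key) {
    lastReturnedKey = key.clone();  // defensive copy while the entry is live
  }

  void close() {
    closed = true;
    lastReturnedKey = null;  // drop the cached key once released
  }

  void removeFromDB() {
    if (closed) {
      throw new IllegalStateException("iterator already closed");
    }
    if (lastReturnedKey == null) {
      throw new IllegalStateException("next() has not been called");
    }
    // a real implementation would call table.delete(lastReturnedKey) here
  }
}
```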
##########
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/utils/db/Table.java:
##########
@@ -420,4 +420,8 @@ public int hashCode() {
interface KeyValueIterator<KEY, VALUE>
extends TableIterator<KEY, KeyValue<KEY, VALUE>> {
}
+
+ /** A {@link org.apache.hadoop.hdds.utils.db.TableSpliterator} to split iterate {@link KeyValue}s. */
+ interface KeyValueSpliterator<KEY, VALUE> extends TableSpliterator<KEY, KeyValue<KEY, VALUE>> {
Review Comment:
Do we need to define TableSpliterator at all?
##########
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/utils/db/RDBStoreAbstractIterator.java:
##########
@@ -150,5 +181,21 @@ public final void removeFromDB() throws IOException {
@Override
public void close() {
rocksDBIterator.close();
+ closed = true;
+ releaseEntry();
+ }
+
+ public static final class AutoCloseableRawKeyValue<RAW> extends RawKeyValue<RAW> implements AutoCloseable {
+ private final UncheckedAutoCloseableSupplier<RawKeyValue<RAW>> keyValue;
+
+ private AutoCloseableRawKeyValue(ReferenceCountedObject<RawKeyValue<RAW>> kv) {
Review Comment:
This class itself can be a reference-counted AutoCloseable, instead of wrapping at the caller:
- retain() --> increments the reference count
- release()/close() --> decrements the reference count and, when it reaches "0", closes the object.
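A standalone sketch of that idea, with hypothetical names (this is not Ozone's ReferenceCountedObject API): the key-value carries its own atomic reference count, retain() bumps it, and close() acts as release(), freeing the underlying resources when the count drops to zero:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative reference-counted AutoCloseable key-value.
// The object starts with one reference held by its creator.
class RefCountedKeyValue implements AutoCloseable {
  private final byte[] key;
  private final AtomicInteger refCount = new AtomicInteger(1);

  RefCountedKeyValue(byte[] key) { this.key = key; }

  RefCountedKeyValue retain() {
    // only increment while the count is still positive; never resurrect
    if (refCount.updateAndGet(c -> c > 0 ? c + 1 : c) == 0) {
      throw new IllegalStateException("already released");
    }
    return this;
  }

  byte[] getKey() {
    if (refCount.get() <= 0) {
      throw new IllegalStateException("accessed after release");
    }
    return key;
  }

  @Override
  public void close() {  // close() doubles as release()
    if (refCount.decrementAndGet() == 0) {
      // last reference gone: release buffers / native handles here
    }
  }
}
```

This removes the two-object wrapping at the call site: callers retain() when they keep a reference and close() when done, and the object defends itself against use after release.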
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]