vvcephei commented on a change in pull request #8987:
URL: https://github.com/apache/kafka/pull/8987#discussion_r450961919



##########
File path: streams/src/main/java/org/apache/kafka/streams/state/internals/AbstractRocksDBSegmentedBytesStore.java
##########
@@ -251,7 +252,7 @@ void restoreAllInternal(final Collection<KeyValue<byte[], byte[]>> records) {
                 // This handles the case that state store is moved to a new client and does not
                 // have the local RocksDB instance for the segment. In this case, toggleDBForBulkLoading
                 // will only close the database and open it again with bulk loading enabled.
-                if (!bulkLoadSegments.contains(segment)) {
+                if (!bulkLoadSegments.contains(segment) && context instanceof ProcessorContextImpl) {

Review comment:
       Woah, this is subtle. IIUC, the fix works by asserting that we should only enable bulk loading if the provided context is a ProcessorContextImpl, which is the kind of context that is only provided when adding the store to an active task.
   
   This seems correct to me, and although it's very subtle, it also seems ok as a patch for an older codebase that won't need to be maintained much. But maybe we could add a comment, or an internal method for the check, like
   
   ```suggestion
                   if (!bulkLoadSegments.contains(segment) && isStoreForActiveTask(context)) {
   ```
   
   so that it'll be more obvious what's going on here?
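   
   For clarity, here is a minimal sketch of what such a helper could look like. The name isStoreForActiveTask comes from the suggestion above; the parameter type, visibility, and comment wording are assumptions for illustration, not part of the actual patch:
   
   ```java
   // Hypothetical helper inside AbstractRocksDBSegmentedBytesStore (name taken from
   // the suggestion above; the exact signature is an assumption, not committed code).
   private static boolean isStoreForActiveTask(final ProcessorContext context) {
       // ProcessorContextImpl is the context implementation provided only when the
       // store is added to an active task, so this check skips the bulk-loading
       // toggle for any other (e.g. restore-only) context.
       return context instanceof ProcessorContextImpl;
   }
   ```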



