[ https://issues.apache.org/jira/browse/HDDS-361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16640421#comment-16640421 ]
Hanisha Koneru edited comment on HDDS-361 at 10/5/18 10:37 PM:
---------------------------------------------------------------

[~ljain], thanks for working on this. The patch looks very good. I just have a few minor comments.
# In {{BlockDeletingService#executeDeleteTxn()}}, if a block cannot be deleted, we skip it and continue deleting the other blocks, so the actual number of blocks deleted might be less than the number scheduled in the transaction. However, in {{BlockDeletingService#call()}} we increment {{numBlocksDeleted}} by the number of blocks scheduled for deletion:
{code:java}
if (delTxn != null) {
  executeDeleteTxn(delTxn, defaultStore);
  // increment number of blocks deleted for the container
  numBlocksDeleted += delTxn.getLocalIDCount();
  // if successful, txn can be removed from delete table
{code}
Instead, we should update {{numBlocksDeleted}} with the number of blocks actually deleted by {{executeDeleteTxn}}:
{code:java}
if (delTxn != null) {
  int deletedBlocksCount = executeDeleteTxn(delTxn, defaultStore);
  // increment number of blocks deleted for the container
  numBlocksDeleted += deletedBlocksCount;
  // if successful, txn can be removed from delete table
{code}
# Also, before deleting the transaction from the pending-deletes table, we should verify that all the blocks in the transaction were successfully deleted (see the sketch at the end of this comment):
{code:java}
// if successful, txn can be removed from delete table
if (deletedBlocksCount == delTxn.getLocalIDCount()) {
  batch.delete(pendingDeletes.getHandle(),
      Longs.toByteArray(delTxn.getTxID()));
}
{code}
# In {{BlockDeletingService#executeDeleteTxn}}, the null check for {{delTxn}} is redundant; we already perform this check before calling the function.
# A nit: in {{BlockManagerImpl}}, most of the functions get the DB and then get the default table. We could move this into a private helper to avoid the repetition, for example:
{code:java}
private Table getDefaultTable(KeyValueContainerData containerData,
    Configuration config) throws IOException {
  DBStore db = BlockUtils.getDB(containerData, config);
  return db.getTable(DEFAULT_TABLE);
}
{code}
P.S.: The patch does not apply to trunk anymore.
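To make points 1 and 2 concrete, below is a rough, untested sketch of how {{executeDeleteTxn}} could return the number of blocks it actually deleted and how {{call()}} could use that count before removing the transaction from the pending-deletes table. It assumes {{delTxn}} is the {{DeletedBlocksTransaction}} proto; apart from the identifiers already quoted above ({{delTxn}}, {{defaultStore}}, {{batch}}, {{pendingDeletes}}, {{numBlocksDeleted}}), the names here ({{deleteBlock}}, {{LOG}}) are placeholders, not the actual patch code.
{code:java}
// Sketch only: deleteBlock() and LOG stand in for the existing per-block
// delete logic and logger in BlockDeletingService.
private int executeDeleteTxn(DeletedBlocksTransaction delTxn,
    Table defaultStore) {
  int deletedBlocksCount = 0;
  for (long localId : delTxn.getLocalIDList()) {
    try {
      // Delete one block; failures are skipped so the rest can proceed.
      deleteBlock(defaultStore, localId);
      deletedBlocksCount++;
    } catch (IOException e) {
      LOG.warn("Failed to delete block {} in txn {}", localId,
          delTxn.getTxID(), e);
    }
  }
  return deletedBlocksCount;
}

// Caller in BlockDeletingService#call():
if (delTxn != null) {
  int deletedBlocksCount = executeDeleteTxn(delTxn, defaultStore);
  // Count only the blocks that were actually deleted.
  numBlocksDeleted += deletedBlocksCount;
  // Remove the txn from the pending-deletes table only if every block in
  // the transaction was deleted; otherwise keep it for a later retry.
  if (deletedBlocksCount == delTxn.getLocalIDCount()) {
    batch.delete(pendingDeletes.getHandle(),
        Longs.toByteArray(delTxn.getTxID()));
  }
}
{code}
This way a partially failed transaction stays in the pending-deletes table, so a later run of the service retries the remaining blocks instead of silently dropping them.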
> Use DBStore and TableStore for DN metadata
> ------------------------------------------
>
>                 Key: HDDS-361
>                 URL: https://issues.apache.org/jira/browse/HDDS-361
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>            Reporter: Xiaoyu Yao
>            Assignee: Lokesh Jain
>            Priority: Major
>         Attachments: HDDS-361.001.patch, HDDS-361.002.patch
>
> As part of the OM performance improvement we used Tables for storing a particular type of key-value pair in RocksDB. This Jira aims to use Tables for separating block keys and deletion transactions in the container DB.