[jira] [Commented] (HDFS-16102) Remove redundant iteration in BlockManager#removeBlocksAssociatedTo(...) to save time

2021-06-30 Thread lei w (Jira)


[ https://issues.apache.org/jira/browse/HDFS-16102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17372307#comment-17372307 ]

lei w commented on HDFS-16102:
--

Thanks [~hexiaoqiao] for your reply. I will update it.

> Remove redundant iteration in BlockManager#removeBlocksAssociatedTo(...) to 
> save time 
> --
>
> Key: HDFS-16102
> URL: https://issues.apache.org/jira/browse/HDFS-16102
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: lei w
>Assignee: lei w
>Priority: Minor
> Attachments: HDFS-16102.001.patch
>
>
> The current logic in removeBlocksAssociatedTo(...) is as follows:
> {code:java}
>   void removeBlocksAssociatedTo(final DatanodeDescriptor node) {
>     providedStorageMap.removeDatanode(node);
>     for (DatanodeStorageInfo storage : node.getStorageInfos()) {
>       final Iterator<BlockInfo> it = storage.getBlockIterator();
>       // Add the BlockInfos to a new collection as the
>       // returned iterator is not modifiable.
>       Collection<BlockInfo> toRemove = new ArrayList<>();
>       while (it.hasNext()) {
>         toRemove.add(it.next()); // First iteration: copy the blocks into another collection.
>       }
>       for (BlockInfo b : toRemove) {
>         removeStoredBlock(b, node); // Second iteration: remove the blocks.
>       }
>     }
>     // ...
>   }
> {code}
> In fact, we could remove each block during the first iteration, so should
> we drop the redundant second iteration to save time and memory?
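
For illustration, a minimal sketch of the proposed single-pass variant (hypothetical; this is not the attached HDFS-16102.001.patch, and it assumes removeStoredBlock(...) leaves the storage's block iterator valid, which is exactly the constraint the copy-into-a-collection pattern above works around):

{code:java}
// Hypothetical single-pass sketch. Assumption: the iterator returned by
// storage.getBlockIterator() remains valid while removeStoredBlock(...)
// mutates the storage. If it does not, the copy-then-remove pattern
// quoted above is required.
void removeBlocksAssociatedTo(final DatanodeDescriptor node) {
  providedStorageMap.removeDatanode(node);
  for (DatanodeStorageInfo storage : node.getStorageInfos()) {
    final Iterator<BlockInfo> it = storage.getBlockIterator();
    while (it.hasNext()) {
      // Single pass: remove each block as it is visited, with no
      // intermediate ArrayList allocation.
      removeStoredBlock(it.next(), node);
    }
  }
  // ...
}
{code}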




[jira] [Commented] (HDFS-16102) Remove redundant iteration in BlockManager#removeBlocksAssociatedTo(...) to save time

2021-06-30 Thread Xiaoqiao He (Jira)


[ https://issues.apache.org/jira/browse/HDFS-16102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17372114#comment-17372114 ]

Xiaoqiao He commented on HDFS-16102:


Thanks [~lei w] for your report. It seems your codebase is neither the latest
release nor branch trunk; this has already been updated on trunk. FYI. Thanks.
