[ https://issues.apache.org/jira/browse/HDDS-288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16554777#comment-16554777 ]

Tsz Wo Nicholas Sze commented on HDDS-288:
------------------------------------------

> There is a memory leak in removeContainer(..) – it sets the entry to null 
> instead of removing it.

The above statement is incorrect: computeIfPresent(..) does remove the
entry when the remapping function returns null. Still, why not simply call
remove(..)?
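For reference, a minimal standalone sketch (a hypothetical map and key, not the
OpenContainerBlockMap code itself) showing that computeIfPresent(..) removes the
mapping when the remapping function returns null, and that remove(..) achieves
the same effect more directly:

```java
import java.util.concurrent.ConcurrentHashMap;

public class ComputeIfPresentDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<Long, String> map = new ConcurrentHashMap<>();
        map.put(1L, "block");

        // Per the Map contract, returning null from the remapping
        // function removes the mapping entirely -- no null entry is left.
        map.computeIfPresent(1L, (key, value) -> null);
        System.out.println(map.containsKey(1L)); // false -- entry is gone

        // remove(..) does the same thing without the lambda indirection.
        map.put(2L, "block");
        map.remove(2L);
        System.out.println(map.containsKey(2L)); // false
    }
}
```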

> Fix bugs in OpenContainerBlockMap
> ---------------------------------
>
>                 Key: HDDS-288
>                 URL: https://issues.apache.org/jira/browse/HDDS-288
>             Project: Hadoop Distributed Data Store
>          Issue Type: Improvement
>            Reporter: Tsz Wo Nicholas Sze
>            Assignee: Tsz Wo Nicholas Sze
>            Priority: Major
>
> - OpenContainerBlockMap should not be synchronized, for better performance. 
> - There is a memory leak in removeContainer(..) -- it sets the entry to null 
> instead of removing it.
> - addChunkToMap may add the same chunk twice.  See the comments below.
> {code}
>       keyDataSet.putIfAbsent(blockID.getLocalID(), getKeyData(info, blockID)); // (1) when the id is absent, it puts
>       keyDataSet.computeIfPresent(blockID.getLocalID(), (key, value) -> { // (2) now the id is present, so it adds again
>         value.addChunk(info);
>         return value;
>       });
> {code}
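The double add described in the last bullet can be reproduced with a plain map
of lists (hypothetical names; the real KeyData/ChunkInfo types are replaced by
strings). One possible fix, sketched under those assumptions, is a single
computeIfAbsent(..) followed by exactly one add:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;

public class AddChunkDemo {
    public static void main(String[] args) {
        long id = 42L;
        String chunk = "chunk-1";

        // Buggy pattern from the issue: the initial value already contains
        // the chunk, and computeIfPresent then adds the same chunk again.
        ConcurrentHashMap<Long, List<String>> buggy = new ConcurrentHashMap<>();
        List<String> initial = new ArrayList<>();
        initial.add(chunk);
        buggy.putIfAbsent(id, initial);                 // (1) puts [chunk]
        buggy.computeIfPresent(id, (key, value) -> {    // (2) adds chunk again
            value.add(chunk);
            return value;
        });
        System.out.println(buggy.get(id).size()); // 2 -- chunk duplicated

        // Fix sketch: computeIfAbsent creates the entry if needed and
        // returns it, so the chunk is appended exactly once.
        ConcurrentHashMap<Long, List<String>> fixed = new ConcurrentHashMap<>();
        fixed.computeIfAbsent(id, key -> new ArrayList<>()).add(chunk);
        System.out.println(fixed.get(id).size()); // 1
    }
}
```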



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
