[ https://issues.apache.org/jira/browse/HDFS-10188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15289915#comment-15289915 ]

Xiaowei Zhu commented on HDFS-10188:
------------------------------------

What does sizeof(clazz) return? Will it always be the size of the memory block 
pointed to by ptr? 
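
For illustration, here is a minimal sketch of the case where the two can 
disagree (the SCRUB_ON_DELETE macro and class names are hypothetical, not 
taken from the patch): a class-scoped operator delete is inherited by derived 
classes, so a sizeof(clazz) baked in at the base class can under-count the 
block that ptr actually points to.

#include <cstddef>
#include <cstdlib>
#include <cstring>
#include <iostream>
#include <new>

// Hypothetical macro: gives a class a scrubbing new/delete, but hard-codes
// sizeof(clazz) at the point of expansion.
#define SCRUB_ON_DELETE(clazz)                                              \
  static void *operator new(std::size_t sz) {                               \
    void *p = std::malloc(sz);                                              \
    if (!p) throw std::bad_alloc();                                         \
    return p;                                                               \
  }                                                                         \
  static void operator delete(void *ptr) {                                  \
    std::memset(ptr, 0, sizeof(clazz)); /* may be smaller than the block */ \
    std::free(ptr);                                                         \
  }

struct Base {
  SCRUB_ON_DELETE(Base)      // sizeof(Base) is baked in here
  virtual ~Base() = default;
  long a = 1;
};

struct Derived : Base {      // inherits Base::operator delete unchanged
  long b = 2;                // never scrubbed by the memset above
};

int main() {
  Base *p = new Derived;     // the block is sizeof(Derived) bytes
  delete p;                  // zeroes only the first sizeof(Base) bytes
  std::cout << sizeof(Base) << " vs " << sizeof(Derived) << "\n";
}

It might also be worth checking whether the sized form, operator 
delete(void *, std::size_t), is a better fit here: when deleting through a 
virtual destructor the compiler passes the size of the dynamic type, so it 
would not have this mismatch.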

> libhdfs++: Implement debug allocators
> -------------------------------------
>
>                 Key: HDFS-10188
>                 URL: https://issues.apache.org/jira/browse/HDFS-10188
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: hdfs-client
>            Reporter: James Clampffer
>            Assignee: Xiaowei Zhu
>         Attachments: HDFS-10188.HDFS-8707.000.patch, 
> HDFS-10188.HDFS-8707.001.patch, HDFS-10188.HDFS-8707.002.patch, 
> HDFS-10188.HDFS-8707.003.patch, HDFS-10188.HDFS-8707.004.patch
>
>
> I propose implementing a set of operator new/delete pairs with additional 
> checking to detect double-delete, read-after-delete, and write-after-delete 
> errors, to help debug resource ownership issues and prevent new ones from 
> entering the library.
> One of the most common classes of bugs we hit is use-after-free.  The 
> continuation pattern makes these really tricky to debug because by the time 
> a SIGSEGV is raised the context of what caused the error is long gone.
> The plan is to add allocators that can be turned on to do the following, in 
> order of increasing runtime cost:
> 1: no-op, forward through to the default new/delete
> 2: make sure the memory handed to the constructor is dirty, and memset freed 
> memory to 0
> 3: implement operator new with mmap and lock that region of memory once it's 
> been deleted; obviously this can't be left to run forever because the memory 
> is never unmapped
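
To make options 2 and 3 concrete, here is a rough, Linux-specific sketch 
(function names are made up, not taken from the attached patches): option 2 
scribbles a pattern over new memory and zeroes it on delete; option 3 serves 
each allocation from its own mmap'd pages and mprotect()s them to PROT_NONE on 
delete, so any later access faults at the exact offending instruction.

#include <cstddef>
#include <cstdlib>
#include <cstring>
#include <new>
#include <sys/mman.h>
#include <unistd.h>

// Option 2: hand the constructor dirty memory and zero it again on delete.
// Cheap, and makes read-after-delete return zeros instead of stale data.
void *scribble_new(std::size_t sz) {
  void *p = std::malloc(sz);
  if (!p) throw std::bad_alloc();
  std::memset(p, 0xCC, sz);        // constructor must initialize every field
  return p;
}
void scribble_delete(void *p, std::size_t sz) {
  std::memset(p, 0, sz);           // scrub before releasing
  std::free(p);
}

// Option 3: back each allocation with its own pages and lock them on delete.
// Any use-after-free then raises SIGSEGV at the faulting access, while the
// guilty call stack is still live.  The pages are never unmapped, so this is
// only suitable for bounded debug runs.
static std::size_t round_to_pages(std::size_t sz) {
  std::size_t page = static_cast<std::size_t>(sysconf(_SC_PAGESIZE));
  return ((sz + page - 1) / page) * page;
}
void *locking_new(std::size_t sz) {
  void *p = mmap(nullptr, round_to_pages(sz), PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  if (p == MAP_FAILED) throw std::bad_alloc();
  return p;
}
void locking_delete(void *p, std::size_t sz) {
  mprotect(p, round_to_pages(sz), PROT_NONE);   // mapped but untouchable
}

Presumably either pair could be wired in behind the no-op option 1 via 
class-scoped operator new/delete or a compile-time switch.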
> This should also put some groundwork in place for implementing specialized 
> allocators for tiny objects that we churn through, like std::string.
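
On the std::string point, the natural hook would be the allocator template 
parameter of std::basic_string, so a tiny-object pool could plug in without 
touching call sites. A minimal pass-through shim just to show the shape (the 
PoolAllocator name is made up, and the actual pooling is elided):

#include <cstddef>
#include <new>
#include <string>

// Hypothetical shim: a real version would carve small blocks out of a pool
// instead of forwarding to the global heap.
template <typename T>
struct PoolAllocator {
  using value_type = T;
  PoolAllocator() = default;
  template <typename U> PoolAllocator(const PoolAllocator<U> &) {}
  T *allocate(std::size_t n) {
    return static_cast<T *>(::operator new(n * sizeof(T)));
  }
  void deallocate(T *p, std::size_t) { ::operator delete(p); }
};
template <typename T, typename U>
bool operator==(const PoolAllocator<T> &, const PoolAllocator<U> &) { return true; }
template <typename T, typename U>
bool operator!=(const PoolAllocator<T> &, const PoolAllocator<U> &) { return false; }

// std::string with the allocator swapped in.
using PooledString =
    std::basic_string<char, std::char_traits<char>, PoolAllocator<char>>;

Note that on most implementations short strings stay in the small-string 
buffer and never touch the allocator, so the payoff is on longer strings.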


