[ 
https://issues.apache.org/jira/browse/HADOOP-17209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17181077#comment-17181077
 ] 

Sean Chow edited comment on HADOOP-17209 at 8/20/20, 9:49 AM:
--------------------------------------------------------------

Hi [~sodonnell], the patch has 3 {{ReleaseIntArrayElements}}. :P

Yeah, I've replaced the native library with the recompiled one on five datanodes in 
my production cluster. All of them work fine when getting and putting files, and the 
memory usage looks very promising.
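
For reference, the leak follows the classic JNI pattern: a buffer obtained with 
{{GetIntArrayElements}} is never handed back with {{ReleaseIntArrayElements}}, so the 
native copy is leaked on every call. A minimal sketch of the pattern and the fix 
(hypothetical native method, not the actual Hadoop erasure-coding coder code):
{code:c}
#include <jni.h>

/* Hypothetical native method illustrating the leak pattern the patch fixes. */
JNIEXPORT void JNICALL
Java_Example_decode(JNIEnv *env, jobject obj, jintArray offsets)
{
    /* The JVM may hand back a malloc'd copy of the array contents. */
    jint *tmp = (*env)->GetIntArrayElements(env, offsets, NULL);
    if (tmp == NULL) {
        return;  /* OutOfMemoryError already pending */
    }

    /* ... use tmp ... */

    /* The fix: pair every Get with a Release so the copy is freed.
       JNI_ABORT frees the buffer without copying changes back to Java. */
    (*env)->ReleaseIntArrayElements(env, offsets, tmp, JNI_ABORT);
}
{code}
Without the final {{ReleaseIntArrayElements}} call, each invocation leaks the copied 
buffer as native (off-heap) memory, which is why RSS keeps growing past {{-Xmx}}.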



> Erasure Coding: Native library memory leak
> ------------------------------------------
>
>                 Key: HADOOP-17209
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17209
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: native
>    Affects Versions: 3.3.0, 3.2.1, 3.1.3
>            Reporter: Sean Chow
>            Assignee: Sean Chow
>            Priority: Major
>         Attachments: HADOOP-17209.001.patch, 
> datanode.202137.detail_diff.5.txt, image-2020-08-15-18-26-44-744.png, 
> image-2020-08-20-12-35-39-906.png
>
>
> We use both {{apache-hadoop-3.1.3}} and {{CDH-6.1.1-1.cdh6.1.1.p0.875250}} 
> HDFS in production, and both show datanode memory usage growing well beyond the 
> {{-Xmx}} value. 
> !image-2020-08-15-18-26-44-744.png!
>  
> We use the erasure coding (EC) strategy to save storage costs.
> These are the JVM options:
> {code:java}
> -Dproc_datanode -Dhdfs.audit.logger=INFO,RFAAUDIT 
> -Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true 
> -Xms8589934592 -Xmx8589934592 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC 
> -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled 
> -XX:+HeapDumpOnOutOfMemoryError ...{code}
> The max JVM heap size is 8 GB, but the datanode RSS memory is 48 GB. 
> All the other datanodes in this HDFS cluster have the same issue.
> {code:java}
> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 
> 226044 hdfs 20 0 50.6g 48g 4780 S 90.5 77.0 14728:27 
> /usr/java/jdk1.8.0_162/bin/java -Dproc_datanode{code}
>  
> This excessive memory usage makes the machine unresponsive (if swap is enabled), 
> or triggers the oom-killer.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
