[ https://issues.apache.org/jira/browse/HADOOP-17209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Sean Chow updated HADOOP-17209:
-------------------------------
    Description: 
We use both {{apache-hadoop-3.1.3}} and {{CDH-6.1.1-1.cdh6.1.1.p0.875250}} HDFS in production, and both show resident memory growing well beyond the {{-Xmx}} value.

!image-2020-08-15-18-26-44-744.png!

These are the JVM options:
{code:java}
-Dproc_datanode -Dhdfs.audit.logger=INFO,RFAAUDIT -Dsecurity.audit.logger=INFO,RFAS
-Djava.net.preferIPv4Stack=true -Xms8589934592 -Xmx8589934592 -XX:+UseParNewGC
-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70
-XX:+CMSParallelRemarkEnabled -XX:+HeapDumpOnOutOfMemoryError ...{code}
The maximum JVM heap size is 8 GB, but the DataNode's RSS is 48 GB:
{code:java}
   PID USER PR NI  VIRT RES  SHR S %CPU %MEM    TIME+ COMMAND
226044 hdfs 20  0 50.6g 48g 4780 S 90.5 77.0 14728:27 /usr/java/jdk1.8.0_162/bin/java -Dproc_datanode{code}
This excessive memory usage makes the machine unresponsive (when swap is enabled) or triggers the oom-killer.


was:
We use both {{apache-hadoop-3.1.3}} and {{CDH-6.1.1-1.cdh6.1.1.p0.875250}} HDFS in production, and both show resident memory growing well beyond the {{-Xmx}} value.

These are the JVM options:
{code:java}
-Dproc_datanode -Dhdfs.audit.logger=INFO,RFAAUDIT -Dsecurity.audit.logger=INFO,RFAS
-Djava.net.preferIPv4Stack=true -Xms8589934592 -Xmx8589934592 -XX:+UseParNewGC
-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70
-XX:+CMSParallelRemarkEnabled -XX:+HeapDumpOnOutOfMemoryError ...{code}
The maximum JVM heap size is 8 GB, but the DataNode's RSS is 48 GB:
{code:java}
   PID USER PR NI  VIRT RES  SHR S %CPU %MEM    TIME+ COMMAND
226044 hdfs 20  0 50.6g 48g 4780 S 90.5 77.0 14728:27 /usr/java/jdk1.8.0_162/bin/java -Dproc_datanode{code}
!image-2020-08-15-17-50-48-598.png!

This excessive memory usage makes the machine unresponsive (when swap is enabled) or triggers the oom-killer.


> ErasureCode native library memory leak
> --------------------------------------
>
>                 Key: HADOOP-17209
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17209
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: native
>    Affects Versions: 3.3.0, 3.2.1, 3.1.3
>            Reporter: Sean Chow
>            Assignee: Sean Chow
>            Priority: Major
>         Attachments: image-2020-08-15-18-25-48-830.png, image-2020-08-15-18-26-44-744.png
>
> We use both {{apache-hadoop-3.1.3}} and {{CDH-6.1.1-1.cdh6.1.1.p0.875250}} HDFS in production, and both show resident memory growing well beyond the {{-Xmx}} value.
> !image-2020-08-15-18-26-44-744.png!
> These are the JVM options:
> {code:java}
> -Dproc_datanode -Dhdfs.audit.logger=INFO,RFAAUDIT -Dsecurity.audit.logger=INFO,RFAS
> -Djava.net.preferIPv4Stack=true -Xms8589934592 -Xmx8589934592 -XX:+UseParNewGC
> -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70
> -XX:+CMSParallelRemarkEnabled -XX:+HeapDumpOnOutOfMemoryError ...{code}
> The maximum JVM heap size is 8 GB, but the DataNode's RSS is 48 GB:
> {code:java}
>    PID USER PR NI  VIRT RES  SHR S %CPU %MEM    TIME+ COMMAND
> 226044 hdfs 20  0 50.6g 48g 4780 S 90.5 77.0 14728:27 /usr/java/jdk1.8.0_162/bin/java -Dproc_datanode{code}
> This excessive memory usage makes the machine unresponsive (when swap is enabled) or triggers the oom-killer.
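A quick way to double-check that this growth is off-heap rather than a Java heap leak is to compare the heap usage the JVM reports with the RSS the kernel reports, the same comparison made with {{top}} above. Below is a minimal, illustrative sketch (the class name is made up, and the {{/proc/self/status}} parsing is Linux-only), not part of any patch for this issue:
{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

// Hypothetical diagnostic: contrast JVM-reported heap usage with the
// kernel-reported resident set size of this process (Linux only).
public class NativeLeakCheck {
    public static void main(String[] args) throws IOException {
        // Heap usage as seen by the JVM; this is bounded by -Xmx.
        Runtime rt = Runtime.getRuntime();
        long heapUsedMb = (rt.totalMemory() - rt.freeMemory()) >> 20;

        // Resident set size as seen by the kernel, in kB, from the
        // "VmRSS:" line of /proc/self/status.
        long rssKb = Files.readAllLines(Paths.get("/proc/self/status")).stream()
                .filter(l -> l.startsWith("VmRSS:"))
                .mapToLong(l -> Long.parseLong(l.replaceAll("[^0-9]", "")))
                .findFirst().orElse(0);

        // A large, growing gap between the two points at native memory.
        System.out.printf("heap used: %d MB, process RSS: %d MB%n",
                heapUsedMb, rssKb >> 10);
    }
}{code}
Run inside the affected DataNode, heap usage that stays under the 8 GB {{-Xmx}} while RSS keeps climbing toward 48 GB indicates native allocations, e.g. buffers held by the erasure-coding native library, rather than the Java heap. The same comparison can be made externally with {{ps -o rss -p <pid>}} or {{pmap <pid>}}.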