Hello,

When I use Kylin to build sample_cube, the job fails with a "beyond the 
'VIRTUAL' memory limit" error. To avoid the problem I changed the following 
parameters in yarn-site.xml, but that didn't help. How can I solve this?
 

  <property>
      <name>yarn.nodemanager.vmem-check-enabled</name>
      <value>false</value>
  </property>

  <property>
      <name>yarn.nodemanager.vmem-pmem-ratio</name>
      <value>5</value>
  </property>
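
Besides the yarn-site.xml change above, the other thing I was thinking of is 
simply giving the map task more physical memory and heap, for example in 
Kylin's conf/kylin_job_conf.xml. This is just a sketch of what I had in mind; 
the values (a 4096 MB container with a 3 GB heap) are my own guess, and I'm 
not certain this file is the one the sample cube build actually reads:

  <property>
      <name>mapreduce.map.memory.mb</name>
      <value>4096</value>
  </property>

  <property>
      <name>mapreduce.map.java.opts</name>
      <value>-Xmx3072m</value>
  </property>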



Failure task Diagnostics:
[2020-08-18 09:36:31.914]Container [pid=9728,containerID=container_e39_1595925248234_0089_01_000004] is running 533010944B beyond the 'VIRTUAL' memory limit. Current usage: 239.2 MB of 1 GB physical memory used; 2.6 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_e39_1595925248234_0089_01_000004 :
        |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
        |- 9741 9728 9728 9728 (java) 722 27 2672025600 60940 /opt/bigdata/jdk/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Djava.net.preferIPv4Stack=true -Xmx800m -Djava.io.tmpdir=/var/hadoop/tmp/dfs/tmp/nm-local-dir/usercache/root/appcache/application_1595925248234_0089/container_e39_1595925248234_0089_01_000004/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/opt/bigdata/hadoop/logs/userlogs/application_1595925248234_0089/container_e39_1595925248234_0089_01_000004 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 83.249.174.185 37044 attempt_1595925248234_0089_m_000000_2 42880953483268
        |- 9728 9726 9728 9728 (bash) 0 0 115843072 302 /bin/bash -c /opt/bigdata/jdk/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Djava.net.preferIPv4Stack=true -Xmx800m -Djava.io.tmpdir=/var/hadoop/tmp/dfs/tmp/nm-local-dir/usercache/root/appcache/application_1595925248234_0089/container_e39_1595925248234_0089_01_000004/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/opt/bigdata/hadoop/logs/userlogs/application_1595925248234_0089/container_e39_1595925248234_0089_01_000004 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 83.249.174.185 37044 attempt_1595925248234_0089_m_000000_2 42880953483268 1>/opt/bigdata/hadoop/logs/userlogs/application_1595925248234_0089/container_e39_1595925248234_0089_01_000004/stdout 2>/opt/bigdata/hadoop/logs/userlogs/application_1595925248234_0089/container_e39_1595925248234_0089_01_000004/stderr

[2020-08-18 09:36:31.927]Container killed on request. Exit code is 143
[2020-08-18 09:36:31.938]Container exited with a non-zero exit code 143. 


        at org.apache.kylin.engine.mr.common.MapReduceExecutable.doWork(MapReduceExecutable.java:223)
        at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:178)
        at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:71)
        at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:178)
        at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:114)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
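
If I read the numbers in the diagnostics above correctly (and assuming the 
default yarn.nodemanager.vmem-pmem-ratio of 2.1 is what is actually being 
applied, despite my change), the limit and the usage work out roughly like this:

  virtual limit = 1 GB (container allocation) x 2.1 (vmem-pmem ratio) = ~2.1 GB
  virtual usage = 2672025600 B (java) + 115843072 B (bash) = ~2.6 GB

so the container gets killed for virtual memory even though only 239.2 MB of 
physical memory is in use.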
