[
https://issues.apache.org/jira/browse/SAMZA-414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14137471#comment-14137471
]
Chris Riccomini commented on SAMZA-414:
---------------------------------------
I took a look at this.
Running with tmp does produce the log line you've pasted. The problem is that
the path the hprof file gets dumped to is immediately deleted by YARN.
{noformat}
Dumping heap to /tmp/hadoop-criccomi/nm-local-dir/usercache/criccomi/appcache/application_1410969447594_0002/container_1410969447594_0002_01_000002/__package/tmp/java_pid25276.hprof
...
{noformat}
But:
{noformat}
$ ls /tmp/hadoop-criccomi/nm-local-dir/usercache/criccomi/appcache/application_1410969447594_0002/container_1410969447594_0002_01_000002/__package/tmp/
ls: /tmp/hadoop-criccomi/nm-local-dir/usercache/criccomi/appcache/application_1410969447594_0002/container_1410969447594_0002_01_000002/__package/tmp/: No such file or directory
{noformat}
You can make YARN keep the hprof dump around for a while using the
yarn.nodemanager.delete.debug-delay-sec config, but it defaults to 0, so
container directories are deleted as soon as the container exits.
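For example, setting something like this in yarn-site.xml on the NodeManagers
(the 600-second value is just an illustration) would keep the container
directories around long enough to copy the dump out:
{noformat}
<property>
  <name>yarn.nodemanager.delete.debug-delay-sec</name>
  <value>600</value>
</property>
{noformat}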
I wonder if there's somewhere else that we could persist it.
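One option (untested, and the variable name and directory below are
assumptions, not what run-class.sh actually does) would be to point the dump
at a path outside the container's appcache via -XX:HeapDumpPath, e.g.:
{noformat}
# sketch: a stable, NodeManager-local directory that YARN doesn't clean up
JAVA_OPTS="$JAVA_OPTS -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/samza"
{noformat}
That only helps if the directory exists and is writable by the container user
on every NodeManager, though.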
> Enable HeapDumpOnOutOfMemoryError by default
> --------------------------------------------
>
> Key: SAMZA-414
> URL: https://issues.apache.org/jira/browse/SAMZA-414
> Project: Samza
> Issue Type: Bug
> Components: container
> Affects Versions: 0.8.0
> Reporter: Chris Riccomini
> Assignee: Chris Riccomini
> Fix For: 0.8.0
>
> Attachments: SAMZA-414-0.patch, SAMZA-414-1.patch, log-hprof.png
>
>
> It would be nice if Samza's run-class.sh defaulted to using
> -XX:+HeapDumpOnOutOfMemoryError. According to
> [this|http://stackoverflow.com/questions/542979/using-heapdumponoutofmemoryerror-parameter-for-heap-dump-for-jboss]
> post, the JVM puts the heap dump in the CWD by default, which should be fine.