[ https://issues.apache.org/jira/browse/SPARK-20426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15978314#comment-15978314 ]
jin xing edited comment on SPARK-20426 at 4/21/17 8:35 AM:
-----------------------------------------------------------

I posted two screenshots. The external shuffle service of Spark is running under the NodeManager. We can see that:
1. OneForOneStreamManager is occupying nearly 2.5 GB;
2. There is a very large number of FileSegmentManagedBuffers (see "screenshot-2").

> OneForOneStreamManager occupies too much memory.
> ------------------------------------------------
>
>                 Key: SPARK-20426
>                 URL: https://issues.apache.org/jira/browse/SPARK-20426
>             Project: Spark
>          Issue Type: Improvement
>          Components: Shuffle
>    Affects Versions: 2.1.0
>            Reporter: jin xing
>         Attachments: screenshot-1.png, screenshot-2.png
>
>
> Spark jobs are running on a YARN cluster in my warehouse. We enabled the
> external shuffle service (*--conf spark.shuffle.service.enabled=true*).
> Recently the NodeManager runs out of memory now and then. Dumping the heap,
> we found that *OneForOneStreamManager*'s footprint is huge. The NodeManager
> is configured with a 5 GB heap, while *OneForOneStreamManager* costs 2.5 GB
> and there are 5,503,233 *FileSegmentManagedBuffer* instances. Are there any
> suggestions to avoid this other than just increasing the NodeManager's
> memory? Is it possible to skip *registerStream* in OneForOneStreamManager,
> so that we don't need to cache so much metadata (i.e. StreamState)?
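To make the reported growth pattern concrete, here is a minimal, hypothetical Java sketch (not Spark's actual code) of how a stream manager in the style of OneForOneStreamManager accumulates per-stream state: each fetch request registers a stream, and a buffer descriptor per shuffle block stays cached until the stream is consumed. The class and field names below are simplified stand-ins for FileSegmentManagedBuffer and StreamState.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Simplified sketch of a OneForOneStreamManager-style registry.
public class StreamManagerSketch {

    // Stand-in for FileSegmentManagedBuffer: only file-segment metadata,
    // yet millions of instances still add up on the heap.
    static class ManagedBufferStub {
        final String file;
        final long offset;
        final long length;
        ManagedBufferStub(String file, long offset, long length) {
            this.file = file;
            this.offset = offset;
            this.length = length;
        }
    }

    // Stand-in for StreamState: the per-stream list of pending buffers.
    static class StreamState {
        final List<ManagedBufferStub> buffers;
        StreamState(List<ManagedBufferStub> buffers) {
            this.buffers = buffers;
        }
    }

    private final AtomicLong nextStreamId = new AtomicLong(0);
    private final Map<Long, StreamState> streams = new ConcurrentHashMap<>();

    // Every fetch request registers a stream; its metadata lives in the
    // map until the stream is fully read or the connection closes.
    public long registerStream(List<ManagedBufferStub> buffers) {
        long id = nextStreamId.incrementAndGet();
        streams.put(id, new StreamState(buffers));
        return id;
    }

    public int openStreams() {
        return streams.size();
    }

    public static void main(String[] args) {
        StreamManagerSketch mgr = new StreamManagerSketch();
        // Many concurrent reducers each fetching many blocks: cached
        // metadata grows with (streams x blocks per stream), which is
        // the shape of the reported heap dump.
        for (int s = 0; s < 1000; s++) {
            List<ManagedBufferStub> bufs = new ArrayList<>();
            for (int b = 0; b < 100; b++) {
                bufs.add(new ManagedBufferStub(
                    "shuffle_0_" + s + "_0.data", b * 4096L, 4096L));
            }
            mgr.registerStream(bufs);
        }
        System.out.println("open streams: " + mgr.openStreams());
    }
}
```

With 1000 registered streams of 100 blocks each, 100,000 buffer descriptors are held at once; at the scale in the report (5.5 million FileSegmentManagedBuffers) that retained metadata alone can dominate a 5 GB heap.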