[jira] [Commented] (SPARK-27434) memory leak in spark driver

2019-04-11, shahid (JIRA)


[ https://issues.apache.org/jira/browse/SPARK-27434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16815424#comment-16815424 ]

shahid commented on SPARK-27434:


Could you please provide steps for reproducing the issue?

> memory leak in spark driver
> ---
>
> Key: SPARK-27434
> URL: https://issues.apache.org/jira/browse/SPARK-27434
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.4.0
> Environment: OS: CentOS 7
> JVM: openjdk version "1.8.0_201"
> OpenJDK Runtime Environment (IcedTea 3.11.0) (Alpine 8.201.08-r0)
> OpenJDK 64-Bit Server VM (build 25.201-b08, mixed mode)
> Spark version: 2.4.0
>Reporter: Ryne Yang
>Priority: Major
> Attachments: Screen Shot 2019-04-10 at 12.11.35 PM.png
>
>
> We got an OOM exception on the driver after it had completed multiple jobs (we are reusing the Spark context).
> We took a heap dump and ran a leak analysis; it shows 3.5 GB of heap allocated under AsyncEventQueue. Possibly a leak.
>
> Can someone take a look? Here is the heap analysis:
> !Screen Shot 2019-04-10 at 12.11.35 PM.png!
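For anyone trying to capture the same kind of dump: the driver JVM can be told to write a heap dump automatically on OOM with the standard HotSpot flags. A minimal sketch, assuming cluster mode (in client mode the driver JVM is already running, so the flags have to go on the launching JVM instead, e.g. via --driver-java-options); the dump path is a placeholder:

{code:scala}
import org.apache.spark.SparkConf

// Have the driver JVM write a heap dump when it hits OOM, so the
// AsyncEventQueue retention can be inspected offline (e.g. in Eclipse MAT).
val conf = new SparkConf()
  .set("spark.driver.extraJavaOptions",
    "-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/driver-oom.hprof")
{code}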





[jira] [Commented] (SPARK-27434) memory leak in spark driver

2019-04-12, Ryne Yang (JIRA)


[ https://issues.apache.org/jira/browse/SPARK-27434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16816432#comment-16816432 ]

Ryne Yang commented on SPARK-27434:
---

[~shahid] Yup:
 # Start a Spark context with `spark.eventLog.enabled` set to true and the event-log path on HDFS.
 # Do some work in that context.
 # Close the Spark context.
 # Repeat from step 1.

After a few loops, the driver's memory footprint climbs, and a heap dump looks like the one I attached.
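
For reference, a minimal, self-contained sketch of that loop (the master, the job body, and the HDFS event-log directory are placeholders, not taken from the original report):

{code:scala}
import org.apache.spark.{SparkConf, SparkContext}

object EventLogLeakRepro {
  def main(args: Array[String]): Unit = {
    for (i <- 1 to 100) {
      val conf = new SparkConf()
        .setAppName(s"leak-repro-$i")
        .setMaster("local[2]")                                  // placeholder master
        .set("spark.eventLog.enabled", "true")                  // step 1: event logging on
        .set("spark.eventLog.dir", "hdfs:///tmp/spark-events")  // placeholder HDFS path

      val sc = new SparkContext(conf)
      sc.parallelize(1 to 1000000).map(_ * 2).count()           // step 2: some work
      sc.stop()                                                 // step 3: close the context

      // Rough view of driver heap growth across iterations (step 4: repeat).
      val usedMb = (Runtime.getRuntime.totalMemory - Runtime.getRuntime.freeMemory) / (1024 * 1024)
      println(s"iteration $i: ~$usedMb MB used on driver")
    }
  }
}
{code}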






[jira] [Commented] (SPARK-27434) memory leak in spark driver

2019-04-17, Ryne Yang (JIRA)


[ https://issues.apache.org/jira/browse/SPARK-27434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16820516#comment-16820516 ]

Ryne Yang commented on SPARK-27434:
---

[~shahid] Were you able to reproduce this with the steps I provided?






[jira] [Commented] (SPARK-27434) memory leak in spark driver

2019-04-25, Ryne Yang (JIRA)


[ https://issues.apache.org/jira/browse/SPARK-27434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16826425#comment-16826425 ]

Ryne Yang commented on SPARK-27434:
---

Submitted a PR with a fix:

[https://github.com/apache/spark/pull/24461]


