[ 
https://issues.apache.org/jira/browse/SPARK-6055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14339846#comment-14339846
 ] 

Apache Spark commented on SPARK-6055:
-------------------------------------

User 'davies' has created a pull request for this issue:
https://github.com/apache/spark/pull/4809

> memory leak in pyspark sql
> --------------------------
>
>                 Key: SPARK-6055
>                 URL: https://issues.apache.org/jira/browse/SPARK-6055
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark, SQL
>    Affects Versions: 1.1.1, 1.3.0, 1.2.1
>            Reporter: Davies Liu
>            Assignee: Davies Liu
>            Priority: Blocker
>
> The __eq__ of DataType is not correct: the class cache is not used correctly 
> (a created class cannot be found by its dataType), so lots of classes are 
> created (saved in _cached_cls) and never released.
> Also, all DataType instances share the same hash code, so a dict keyed by 
> DataType ends up with many objects in the same bucket (hash collisions), 
> making access to that dict very slow (depending on the CPython implementation).
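The failure mode described above can be sketched with a toy class (this is not Spark's actual DataType code; the names BadType/GoodType are hypothetical). When __eq__ is too loose and __hash__ is constant, a dict used as a class cache collapses entries and collides on one bucket; giving equal objects equal hashes and distinct objects distinct hashes fixes both problems:

```python
class BadType:
    """Mimics the bug: equality ignores the distinguishing field,
    and every instance hashes to the same value."""
    def __init__(self, name):
        self.name = name

    def __eq__(self, other):
        # Too loose: any two BadType instances compare equal
        return isinstance(other, BadType)

    def __hash__(self):
        # Constant hash: every instance lands in the same dict bucket
        return 1


class GoodType:
    """The fix: equality and hash both depend on the distinguishing field."""
    def __init__(self, name):
        self.name = name

    def __eq__(self, other):
        return isinstance(other, GoodType) and self.name == other.name

    def __hash__(self):
        # Equal objects hash equally; distinct ones spread across buckets
        return hash(self.name)


# A cache keyed on BadType cannot tell entries apart: the second insert
# silently overwrites the first, and lookups degrade to linear probing
# within one bucket as the dict grows.
bad_cache = {}
bad_cache[BadType("int")] = "IntClass"
bad_cache[BadType("str")] = "StrClass"
assert len(bad_cache) == 1  # entries collapsed into one

good_cache = {}
good_cache[GoodType("int")] = "IntClass"
good_cache[GoodType("str")] = "StrClass"
assert len(good_cache) == 2
assert good_cache[GoodType("int")] == "IntClass"
```

With the loose __eq__, cache hits return the wrong class or force re-creation, which is why _cached_cls grows without bound.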



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
