[jira] [Updated] (SPARK-6055) Memory leak in pyspark sql due to incorrect equality check

2015-02-27 Thread Davies Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-6055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Davies Liu updated SPARK-6055:
--
Description: 
The __eq__ of DataType is not implemented correctly, so the class cache is not 
used as intended: a class created for a given dataType can never be found again 
by that dataType, and new classes are created over and over (saved in 
_cached_cls) without ever being released.

Also, all instances of the same DataType share one hash code, so a dict keyed 
by them accumulates many objects under a single hash bucket (effectively a 
hash-collision attack); accessing such a dict becomes very slow (depending on 
the CPython implementation).
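The slowdown can be sketched in the same illustrative spirit (SameHash and GoodHash are hypothetical stand-ins; the constant hash mimics every instance of a DataType hashing alike), showing dict insertion degrading toward quadratic work when every key collides:

```python
import timeit

class SameHash:
    def __init__(self, n):
        self.n = n
    def __eq__(self, other):
        return isinstance(other, SameHash) and self.n == other.n
    def __hash__(self):
        # Every instance collides, like DataType objects sharing one hash.
        return 42

class GoodHash(SameHash):
    def __hash__(self):
        # Distinct hashes spread keys across buckets as intended.
        return hash(self.n)

def build(cls, n=500):
    # Each insertion into the all-colliding dict must probe and compare
    # against every previously stored key: ~n^2/2 __eq__ calls in total.
    return {cls(i): i for i in range(n)}

slow = timeit.timeit(lambda: build(SameHash), number=3)
fast = timeit.timeit(lambda: build(GoodHash), number=3)
# The colliding version is dramatically slower than the well-hashed one.
```

The dicts end up with the same 500 entries either way; only the cost of getting there (and of later lookups) differs, which matches the reported slowdown.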



> Memory leak in pyspark sql due to incorrect equality check
> --
>
> Key: SPARK-6055
> URL: https://issues.apache.org/jira/browse/SPARK-6055
> Project: Spark
>  Issue Type: Bug
>  Components: PySpark, SQL
>Affects Versions: 1.1.1, 1.3.0, 1.2.1
>Reporter: Davies Liu
>Assignee: Davies Liu
>Priority: Blocker
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-6055) Memory leak in pyspark sql due to incorrect equality check

2015-02-27 Thread Patrick Wendell (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-6055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Patrick Wendell updated SPARK-6055:
---
Summary: Memory leak in pyspark sql due to incorrect equality check  (was: 
memory leak in pyspark sql)



