[jira] [Assigned] (SPARK-40281) Memory Profiler on Executors

2022-11-10 Thread Hyukjin Kwon (Jira)


 [ https://issues.apache.org/jira/browse/SPARK-40281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon reassigned SPARK-40281:


Assignee: Xinrong Meng

> Memory Profiler on Executors
> 
>
> Key: SPARK-40281
> URL: https://issues.apache.org/jira/browse/SPARK-40281
> Project: Spark
>  Issue Type: New Feature
>  Components: PySpark
>Affects Versions: 3.4.0
>Reporter: Xinrong Meng
>Assignee: Xinrong Meng
>Priority: Major
>
> The ticket proposes to implement PySpark memory profiling on executors. See the 
> [design|https://docs.google.com/document/d/e/2PACX-1vR2K4TdrM1eAjNDC1bsflCNRH67UWLoC-lCv6TSUVXD91Ruksm99pYTnCeIm7Ui3RgrrRNcQU_D8-oh/pub] 
> for details.
> Many factors affect a PySpark program's performance, and memory, one of the 
> key factors, has been missing from PySpark profiling. A PySpark program on the 
> Spark driver can be profiled with [Memory 
> Profiler|https://pypi.org/project/memory-profiler/] as a normal Python 
> process, but there has been no easy way to profile memory on Spark executors.
> PySpark UDFs, among the most popular Python APIs, enable users to run custom 
> code on top of the Apache Spark™ engine. However, it is difficult to optimize 
> UDFs without understanding their memory consumption.
> The ticket proposes to introduce a PySpark memory profiler that profiles 
> memory on executors. It reports total memory usage and pinpoints which lines 
> of code in a UDF account for the most memory usage. That will help users 
> optimize PySpark UDFs and reduce the likelihood of out-of-memory errors.
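
For illustration, a minimal sketch of how such an executor-side profiler could be driven from PySpark. It assumes the spark.python.profile.memory setting and reuses the existing SparkContext.show_profiles() hook; the exact surface this ticket lands may differ, so treat the names as provisional. It also assumes the memory-profiler and pandas packages are available on the cluster.

    import pandas as pd
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import pandas_udf

    # Assumed configuration name: enable executor-side memory profiling.
    spark = (
        SparkSession.builder
        .config("spark.python.profile.memory", "true")
        .getOrCreate()
    )

    @pandas_udf("long")
    def add_one(s: pd.Series) -> pd.Series:
        # The profiler attributes memory increments to each line of this UDF.
        return s + 1

    spark.range(10).select(add_one("id")).collect()

    # Print per-UDF, line-by-line memory usage collected from the executors.
    spark.sparkContext.show_profiles()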






[jira] [Assigned] (SPARK-40281) Memory Profiler on Executors

2022-11-09 Thread Apache Spark (Jira)


 [ https://issues.apache.org/jira/browse/SPARK-40281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-40281:


Assignee: (was: Apache Spark)

> Memory Profiler on Executors
> 
>
> Key: SPARK-40281
> URL: https://issues.apache.org/jira/browse/SPARK-40281
> Project: Spark
>  Issue Type: New Feature
>  Components: PySpark
>Affects Versions: 3.4.0
>Reporter: Xinrong Meng
>Priority: Major
>
> Profiling is critical to performance engineering, and memory consumption is a 
> key indicator of how efficient a PySpark program is. There is an existing 
> effort on memory profiling of Python programs: Memory Profiler 
> ([https://pypi.org/project/memory-profiler/]).
> PySpark applications run as independent sets of processes on a cluster, 
> coordinated by the SparkContext object in the driver program. On the driver 
> side, PySpark is a regular Python process; thus, we can profile it as a 
> normal Python program using Memory Profiler.
> On the executor side, however, such a memory profiler is missing. Since 
> executors are distributed across different nodes in the cluster, we need to 
> aggregate their profiles. Furthermore, Python worker processes are spawned 
> per executor for Python/Pandas UDF execution, which makes memory profiling 
> more intricate.
> The ticket proposes to implement a Memory Profiler on Executors.
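
As context, the driver-side profiling described above needs no Spark support at all; a minimal sketch using the memory-profiler package's profile decorator, which prints a line-by-line memory report when the decorated function runs:

    from memory_profiler import profile

    @profile
    def build_rows():
        # Each line's memory increment is reported when this function is called.
        data = [list(range(1000)) for _ in range(1000)]
        return len(data)

    if __name__ == "__main__":
        build_rows()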






[jira] [Assigned] (SPARK-40281) Memory Profiler on Executors

2022-11-09 Thread Apache Spark (Jira)


 [ https://issues.apache.org/jira/browse/SPARK-40281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-40281:


Assignee: Apache Spark

> Memory Profiler on Executors
> 
>
> Key: SPARK-40281
> URL: https://issues.apache.org/jira/browse/SPARK-40281
> Project: Spark
>  Issue Type: New Feature
>  Components: PySpark
>Affects Versions: 3.4.0
>Reporter: Xinrong Meng
>Assignee: Apache Spark
>Priority: Major
>
> Profiling is critical to performance engineering, and memory consumption is a 
> key indicator of how efficient a PySpark program is. There is an existing 
> effort on memory profiling of Python programs: Memory Profiler 
> ([https://pypi.org/project/memory-profiler/]).
> PySpark applications run as independent sets of processes on a cluster, 
> coordinated by the SparkContext object in the driver program. On the driver 
> side, PySpark is a regular Python process; thus, we can profile it as a 
> normal Python program using Memory Profiler.
> On the executor side, however, such a memory profiler is missing. Since 
> executors are distributed across different nodes in the cluster, we need to 
> aggregate their profiles. Furthermore, Python worker processes are spawned 
> per executor for Python/Pandas UDF execution, which makes memory profiling 
> more intricate.
> The ticket proposes to implement a Memory Profiler on Executors.
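
To make the aggregation point concrete, here is a hypothetical sketch (not Spark internals) of combining line-level memory statistics reported by independent Python worker processes; the profile shape and the merge_profiles helper are invented purely for illustration:

    from collections import defaultdict

    def merge_profiles(profiles):
        # Merge per-worker {(filename, lineno): peak_mib} maps by taking the
        # maximum memory increment observed for each UDF source line.
        merged = defaultdict(float)
        for report in profiles:
            for line, mib in report.items():
                merged[line] = max(merged[line], mib)
        return dict(merged)

    # Two workers reporting peak memory (MiB) per UDF source line:
    worker_a = {("udf.py", 3): 12.5, ("udf.py", 4): 48.0}
    worker_b = {("udf.py", 3): 10.0, ("udf.py", 4): 96.0}
    print(merge_profiles([worker_a, worker_b]))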






[jira] [Assigned] (SPARK-40281) Memory Profiler on Executors

2022-08-30 Thread Xinrong Meng (Jira)


 [ https://issues.apache.org/jira/browse/SPARK-40281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xinrong Meng reassigned SPARK-40281:


Assignee: (was: Xinrong Meng)

> Memory Profiler on Executors
> 
>
> Key: SPARK-40281
> URL: https://issues.apache.org/jira/browse/SPARK-40281
> Project: Spark
>  Issue Type: Umbrella
>  Components: PySpark
>Affects Versions: 3.4.0
>Reporter: Xinrong Meng
>Priority: Major
>
> Profiling is critical to performance engineering, and memory consumption is a 
> key indicator of how efficient a PySpark program is. There is an existing 
> effort on memory profiling of Python programs: Memory Profiler 
> ([https://pypi.org/project/memory-profiler/]).
> PySpark applications run as independent sets of processes on a cluster, 
> coordinated by the SparkContext object in the driver program. On the driver 
> side, PySpark is a regular Python process; thus, we can profile it as a 
> normal Python program using Memory Profiler.
> On the executor side, however, such a memory profiler is missing. Since 
> executors are distributed across different nodes in the cluster, we need to 
> aggregate their profiles. Furthermore, Python worker processes are spawned 
> per executor for Python/Pandas UDF execution, which makes memory profiling 
> more intricate.
> The umbrella ticket proposes to implement a Memory Profiler on Executors.






[jira] [Assigned] (SPARK-40281) Memory Profiler on Executors

2022-08-30 Thread Xinrong Meng (Jira)


 [ https://issues.apache.org/jira/browse/SPARK-40281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xinrong Meng reassigned SPARK-40281:


Assignee: Xinrong Meng

> Memory Profiler on Executors
> 
>
> Key: SPARK-40281
> URL: https://issues.apache.org/jira/browse/SPARK-40281
> Project: Spark
>  Issue Type: Umbrella
>  Components: PySpark
>Affects Versions: 3.4.0
>Reporter: Xinrong Meng
>Assignee: Xinrong Meng
>Priority: Major
>
> Profiling is critical to performance engineering, and memory consumption is a 
> key indicator of how efficient a PySpark program is. There is an existing 
> effort on memory profiling of Python programs: Memory Profiler 
> ([https://pypi.org/project/memory-profiler/]).
> PySpark applications run as independent sets of processes on a cluster, 
> coordinated by the SparkContext object in the driver program. On the driver 
> side, PySpark is a regular Python process; thus, we can profile it as a 
> normal Python program using Memory Profiler.
> On the executor side, however, such a memory profiler is missing. Since 
> executors are distributed across different nodes in the cluster, we need to 
> aggregate their profiles. Furthermore, Python worker processes are spawned 
> per executor for Python/Pandas UDF execution, which makes memory profiling 
> more intricate.
> The umbrella ticket proposes to implement a Memory Profiler on Executors.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org