[ https://issues.apache.org/jira/browse/SPARK-47336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17827078#comment-17827078 ]
Semyon Sinchenko commented on SPARK-47336:
------------------------------------------

[~grundprinzip-db] what do you think about `DataFrame.approximate_size_in_bytes() -> float` (or `DataFrame.approximateSizeInBytes() -> float`)? Or, for example, `DataFrame.approx_size_bytes()` to avoid very long names?

P.S. I would like to try to implement it; could you assign it to me?

> Provide to PySpark a functionality to get the estimated size of a DataFrame in bytes
> -------------------------------------------------------------------------------------
>
>                 Key: SPARK-47336
>                 URL: https://issues.apache.org/jira/browse/SPARK-47336
>             Project: Spark
>          Issue Type: New Feature
>          Components: Connect, PySpark
>    Affects Versions: 4.0.0
>            Reporter: Semyon Sinchenko
>            Priority: Minor
>
> Something equivalent to
> sessionState().executePlan(...).optimizedPlan().stats().sizeInBytes() in
> JVM-Spark. It could be done via a simple call through `_jsparkSession` in regular
> PySpark, and via a plugin for Spark Connect.
>
> This functionality is useful when one needs to check the feasibility of a
> broadcast join without modifying the global broadcast threshold.
>
> The function in the PySpark API may look like
> `DataFrame.estimate_size_in_bytes() -> float` or
> `DataFrame.estimateSizeInBytes() -> float`.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
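The JVM call quoted in the issue can already be reached from classic (non-Connect) PySpark through the internal `_jdf` handle on a DataFrame. A minimal sketch of what the proposed method might wrap — the helper name `estimated_size_in_bytes` is hypothetical, and the `_jdf`/`queryExecution()` chain is a Spark internal, not a public API:

```python
def estimated_size_in_bytes(df) -> float:
    """Return the optimizer's size estimate for a DataFrame, in bytes.

    Relies on Spark internals: df._jdf is the underlying JVM Dataset,
    queryExecution() exposes the query execution, and the optimized
    logical plan's statistics carry sizeInBytes. This only works in
    classic PySpark; Spark Connect DataFrames have no _jdf, which is
    why the issue proposes a Connect plugin.
    """
    return float(df._jdf.queryExecution().optimizedPlan().stats().sizeInBytes())
```

One could then compare the result against `spark.conf.get("spark.sql.autoBroadcastJoinThreshold")` to decide whether a broadcast join is feasible without touching the global threshold.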