[ 
https://issues.apache.org/jira/browse/SPARK-28761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Vogelbacher updated SPARK-28761:
--------------------------------------
    Description: 
Spark has a setting {{spark.driver.maxResultSize}}, see 
https://spark.apache.org/docs/latest/configuration.html#application-properties :
{noformat}
Limit of total size of serialized results of all partitions for each Spark 
action (e.g. collect) in bytes. Should be at least 1M, or 0 for unlimited. Jobs 
will be aborted if the total size is above this limit. Having a high limit may 
cause out-of-memory errors in driver (depends on spark.driver.memory and memory 
overhead of objects in JVM). Setting a proper limit can protect the driver from 
out-of-memory errors.
{noformat}
This setting can be very useful for constraining the memory that the Spark 
driver needs for a specific Spark action. However, the limit is checked before 
the data is decompressed, in 
https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala#L662
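For reference, the check there totals up the serialized result sizes as task 
results arrive at the driver. A simplified paraphrase of 
{{TaskSetManager.canFetchMoreResults}} (see the linked source for the exact code):
{noformat}
// size is the serialized result size of a single task, as reported to the
// driver. For a DataFrame collect() this byte blob is compressed, so the
// comparison below never sees the decompressed size.
def canFetchMoreResults(size: Long): Boolean = sched.synchronized {
  totalResultSize += size
  calculatedTasks += 1
  if (maxResultSize > 0 && totalResultSize > maxResultSize) {
    abort(s"Total size of serialized results of $calculatedTasks tasks " +
      s"($totalResultSize bytes) is bigger than spark.driver.maxResultSize " +
      s"($maxResultSize bytes)")
    false
  } else {
    true
  }
}
{noformat}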

Even if the compressed data is below the limit, the uncompressed data can still 
be far above it. In order to protect the driver, we should also impose a limit 
on the uncompressed data. We could do this in 
https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/SparkPlan.scala#L344
I propose adding a new config option {{spark.driver.maxUncompressedResultSize}}.
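A minimal sketch of what such a check could look like in 
{{SparkPlan.executeCollect}}, where {{decodeUnsafeRows}} decompresses each 
task's result blob on the driver. The config constant 
{{MAX_UNCOMPRESSED_RESULT_SIZE}} and the exact placement are hypothetical; this 
illustrates the idea, not an actual patch:
{noformat}
// Sketch only: fail fast once the running total of decoded row sizes
// exceeds the proposed (hypothetical) spark.driver.maxUncompressedResultSize.
val maxUncompressed: Long = conf.getConf(MAX_UNCOMPRESSED_RESULT_SIZE)
val results = new ArrayBuffer[InternalRow]
var decodedSize = 0L
byteArrayRdd.collect().foreach { case (_, compressedBytes) =>
  decodeUnsafeRows(compressedBytes).foreach { row =>
    decodedSize += row.asInstanceOf[UnsafeRow].getSizeInBytes
    if (maxUncompressed > 0 && decodedSize > maxUncompressed) {
      throw new SparkException(
        s"Total uncompressed result size ($decodedSize bytes) exceeds " +
        s"spark.driver.maxUncompressedResultSize ($maxUncompressed bytes)")
    }
    results += row
  }
}
results.toArray
{noformat}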

A simple repro of this with the Spark shell:
{noformat}
> printf 'a%.0s' {1..100000} > test.csv # create a 100 KB file
> ./bin/spark-shell --conf "spark.driver.maxResultSize=10000"
scala> val df = spark.read.format("csv").load("/Users/dvogelbacher/test.csv")
df: org.apache.spark.sql.DataFrame = [_c0: string]

scala> val results = df.collect()
results: Array[org.apache.spark.sql.Row] = 
Array([aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa...

scala> results(0).getString(0).size
res0: Int = 100000
{noformat}

Even though we set maxResultSize to 10 KB (10,000 bytes), we collect a result 
that is 100 KB uncompressed.
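The repro data is extremely compressible, which is why the 10 KB limit is never 
tripped. A standalone illustration with {{java.util.zip}} (Spark itself 
defaults to lz4 via {{spark.io.compression.codec}}, but the ratio is similarly 
drastic):
{noformat}
import java.util.zip.Deflater

// Same payload as the repro: 100,000 repeated 'a' bytes.
val input = Array.fill[Byte](100000)('a'.toByte)
val deflater = new Deflater()
deflater.setInput(input)
deflater.finish()
val buf = new Array[Byte](input.length)
val compressedLen = deflater.deflate(buf)
deflater.end()
// Prints on the order of a few hundred bytes, far under the 10,000-byte limit.
println(s"compressed to $compressedLen bytes from ${input.length} bytes")
{noformat}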

> spark.driver.maxResultSize only applies to compressed data
> ----------------------------------------------------------
>
>                 Key: SPARK-28761
>                 URL: https://issues.apache.org/jira/browse/SPARK-28761
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 3.0.0
>            Reporter: David Vogelbacher
>            Priority: Major
>


