[jira] [Updated] (SPARK-3269) SparkSQLOperationManager.getNextRowSet OOMs when a large maxRows is set
[ https://issues.apache.org/jira/browse/SPARK-3269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Armbrust updated SPARK-3269:
------------------------------------
    Assignee: Cheng Lian

> SparkSQLOperationManager.getNextRowSet OOMs when a large maxRows is set
> -----------------------------------------------------------------------
>
>                 Key: SPARK-3269
>                 URL: https://issues.apache.org/jira/browse/SPARK-3269
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.0.2
>            Reporter: Cheng Lian
>            Assignee: Cheng Lian
>
> {{SparkSQLOperationManager.getNextRowSet}} allocates an {{ArrayBuffer[Row]}}
> as large as {{maxRows}}, which can lead to OOM if {{maxRows}} is large, even
> if the actual size of the row set is much smaller.

--
This message was sent by Atlassian JIRA
(v6.2#6252)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
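The allocation pattern the report describes can be sketched as follows. This is an illustrative reconstruction, not the actual Spark source: the `Row` alias, method names, and signatures here are hypothetical stand-ins, and the real `getNextRowSet` takes a Thrift fetch request rather than an iterator.

```scala
import scala.collection.mutable.ArrayBuffer

object RowSetSketch {
  type Row = Seq[Any] // stand-in for Spark SQL's Row

  // Problematic pattern: passing maxRows as ArrayBuffer's initial size
  // eagerly reserves a backing array proportional to the *requested*
  // row count, not the rows actually available. A very large maxRows
  // can OOM here even when only a handful of rows exist.
  def fetchPreallocated(rows: Iterator[Row], maxRows: Int): ArrayBuffer[Row] = {
    val buffer = new ArrayBuffer[Row](maxRows) // backing array of size maxRows
    var fetched = 0
    while (rows.hasNext && fetched < maxRows) {
      buffer += rows.next()
      fetched += 1
    }
    buffer
  }

  // Safer pattern: start with the default capacity and let the buffer
  // grow on demand, so memory use tracks the actual row-set size
  // rather than the caller-supplied maxRows.
  def fetchOnDemand(rows: Iterator[Row], maxRows: Int): ArrayBuffer[Row] = {
    val buffer = new ArrayBuffer[Row]()
    var fetched = 0
    while (rows.hasNext && fetched < maxRows) {
      buffer += rows.next()
      fetched += 1
    }
    buffer
  }
}
```

Both variants return the same rows; the difference is only the up-front reservation. Since JDBC/ODBC clients may legitimately pass a very large `maxRows` as "give me everything", sizing allocations by the request rather than the data is the hazard this issue flags.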
[ https://issues.apache.org/jira/browse/SPARK-3269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Cheng Lian updated SPARK-3269:
------------------------------
    Description:
        {{SparkSQLOperationManager.getNextRowSet}} allocates an {{ArrayBuffer[Row]}} as large as {{maxRows}}, which can lead to OOM if {{maxRows}} is large, even if the actual size of the row set is much smaller.
        (was: {{SparkSQLOperationManager.getNextRowSet}} allocates an {{ArrayBuffer[Row]}} as large as {{maxRows}}, which can lead to OOM if {{maxRows}} is large.)
[ https://issues.apache.org/jira/browse/SPARK-3269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Cheng Lian updated SPARK-3269:
------------------------------
    Affects Version/s: 1.0.2
                       (was: 1.2.0)
                       (was: 1.1.0)
[ https://issues.apache.org/jira/browse/SPARK-3269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Cheng Lian updated SPARK-3269:
------------------------------
    Target Version/s: 1.1.0  (was: 1.1.1)