[jira] [Commented] (SPARK-17405) Simple aggregation query OOMing after SPARK-16525

2016-09-08 Thread Apache Spark (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-17405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15475040#comment-15475040
 ] 

Apache Spark commented on SPARK-17405:
--

User 'ericl' has created a pull request for this issue:
https://github.com/apache/spark/pull/15016

> Simple aggregation query OOMing after SPARK-16525
> -
>
> Key: SPARK-17405
> URL: https://issues.apache.org/jira/browse/SPARK-17405
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.1.0
>Reporter: Josh Rosen
>Priority: Blocker
>
> Prior to SPARK-16525 / https://github.com/apache/spark/pull/14176, the 
> following query ran fine via Beeline / Thrift Server and the Spark shell, but 
> after that patch it consistently OOMs:
> {code}
> CREATE TEMPORARY VIEW table_1(double_col_1, boolean_col_2, timestamp_col_3, smallint_col_4, boolean_col_5, int_col_6, timestamp_col_7, varchar0008_col_8, int_col_9, string_col_10) AS (
>   SELECT * FROM (VALUES
>     (CAST(-147.818640624 AS DOUBLE), CAST(NULL AS BOOLEAN), TIMESTAMP('2012-10-19 00:00:00.0'), CAST(9 AS SMALLINT), false, 77, TIMESTAMP('2014-07-01 00:00:00.0'), '-945', -646, '722'),
>     (CAST(594.195125271 AS DOUBLE), false, TIMESTAMP('2016-12-04 00:00:00.0'), CAST(NULL AS SMALLINT), CAST(NULL AS BOOLEAN), CAST(NULL AS INT), TIMESTAMP('1999-12-26 00:00:00.0'), '250', -861, '55'),
>     (CAST(-454.171126363 AS DOUBLE), false, TIMESTAMP('2008-12-13 00:00:00.0'), CAST(NULL AS SMALLINT), false, -783, TIMESTAMP('2010-05-28 00:00:00.0'), '211', -959, CAST(NULL AS STRING)),
>     (CAST(437.670945524 AS DOUBLE), true, TIMESTAMP('2011-10-16 00:00:00.0'), CAST(952 AS SMALLINT), true, 297, TIMESTAMP('2013-01-13 00:00:00.0'), '262', CAST(NULL AS INT), '936'),
>     (CAST(-387.226759334 AS DOUBLE), false, TIMESTAMP('2019-10-03 00:00:00.0'), CAST(-496 AS SMALLINT), CAST(NULL AS BOOLEAN), -925, TIMESTAMP('2028-06-27 00:00:00.0'), '-657', 948, '18'),
>     (CAST(-306.138230875 AS DOUBLE), true, TIMESTAMP('1997-10-07 00:00:00.0'), CAST(332 AS SMALLINT), false, 744, TIMESTAMP('1990-09-22 00:00:00.0'), '-345', 566, '-574'),
>     (CAST(675.402140308 AS DOUBLE), false, TIMESTAMP('2017-06-26 00:00:00.0'), CAST(972 AS SMALLINT), true, CAST(NULL AS INT), TIMESTAMP('2026-06-10 00:00:00.0'), '518', 683, '-320'),
>     (CAST(734.839647174 AS DOUBLE), true, TIMESTAMP('1995-06-01 00:00:00.0'), CAST(-792 AS SMALLINT), CAST(NULL AS BOOLEAN), CAST(NULL AS INT), TIMESTAMP('2021-07-11 00:00:00.0'), '-318', 564, '142')
>   ) as t);
> CREATE TEMPORARY VIEW table_3(string_col_1, float_col_2, timestamp_col_3, boolean_col_4, timestamp_col_5, decimal3317_col_6) AS (
>   SELECT * FROM (VALUES
>     ('88', CAST(191.92508 AS FLOAT), TIMESTAMP('1990-10-25 00:00:00.0'), false, TIMESTAMP('1992-11-02 00:00:00.0'), CAST(NULL AS DECIMAL(33,17))),
>     ('-419', CAST(-13.477915 AS FLOAT), TIMESTAMP('1996-03-02 00:00:00.0'), true, CAST(NULL AS TIMESTAMP), -653.51000BD),
>     ('970', CAST(-360.432 AS FLOAT), TIMESTAMP('2010-07-29 00:00:00.0'), false, TIMESTAMP('1995-09-01 00:00:00.0'), -936.48000BD),
>     ('807', CAST(814.30756 AS FLOAT), TIMESTAMP('2019-11-06 00:00:00.0'), false, TIMESTAMP('1996-04-25 00:00:00.0'), 335.56000BD),
>     ('-872', CAST(616.50525 AS FLOAT), TIMESTAMP('2011-08-28 00:00:00.0'), false, TIMESTAMP('2003-07-19 00:00:00.0'), -951.18000BD),
>     ('-167', CAST(-875.35675 AS FLOAT), TIMESTAMP('1995-07-14 00:00:00.0'), false, TIMESTAMP('2005-11-29 00:00:00.0'), 224.89000BD)
>   ) as t);
> SELECT
>   CAST(MIN(t2.smallint_col_4) AS STRING) AS char_col,
>   LEAD(MAX((-387) + (727.64)), 90) OVER (
>     PARTITION BY COALESCE(t2.int_col_9, t2.smallint_col_4, t2.int_col_9)
>     ORDER BY COALESCE(t2.int_col_9, t2.smallint_col_4, t2.int_col_9) DESC,
>       CAST(MIN(t2.smallint_col_4) AS STRING)) AS decimal_col,
>   COALESCE(t2.int_col_9, t2.smallint_col_4, t2.int_col_9) AS int_col
> FROM table_3 t1
> INNER JOIN table_1 t2
>   ON (((t2.timestamp_col_3) = (t1.timestamp_col_5)) AND ((t2.string_col_10) = (t1.string_col_1))) AND ((t2.string_col_10) = (t1.string_col_1))
> WHERE
>   (t2.smallint_col_4) IN (t2.int_col_9, t2.int_col_9)
> GROUP BY
>   COALESCE(t2.int_col_9, t2.smallint_col_4, t2.int_col_9);
> {code}
> Here's the OOM:
> {code}
> org.apache.hive.service.cli.HiveSQLException: org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 1.0 failed 1 times, most recent failure: Lost task 1.0 in stage 1.0 (TID 9, localhost): java.lang.OutOfMemoryError: Unable to acquire 262144 bytes of memory, got 0
> at org.apache.spark.memory.MemoryConsumer.allocateArray(MemoryConsumer.java:100)
> at org.apache.spark.unsafe.map.BytesToBytesMap.allocate(BytesToBytesMap.java:783)
> ...
> {code}

[jira] [Commented] (SPARK-17405) Simple aggregation query OOMing after SPARK-16525

2016-09-07 Thread Qifan Pu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-17405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15472037#comment-15472037
 ] 

Qifan Pu commented on SPARK-17405:
--

[~joshrosen] Yes, running local[32] will reproduce the exception. 


[jira] [Commented] (SPARK-17405) Simple aggregation query OOMing after SPARK-16525

2016-09-07 Thread Qifan Pu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-17405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15472009#comment-15472009
 ] 

Qifan Pu commented on SPARK-17405:
--

One quick fix is to set the memory capacity in the configuration so that 
memory_capacity > x * cores, where x is some value larger than 64MB.
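Concretely, that workaround might look like the following. The flags are standard spark-shell/spark-submit options; the sizes here are illustrative, chosen only so that the memory capacity comfortably exceeds 64MB per core:

```shell
# Give the driver enough memory that 64MB * cores stays well below the
# execution memory pool (values are illustrative, not a recommendation).
./bin/spark-shell --master "local[16]" --driver-memory 4g
```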


[jira] [Commented] (SPARK-17405) Simple aggregation query OOMing after SPARK-16525

2016-09-07 Thread Qifan Pu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-17405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471747#comment-15471747
 ] 

Qifan Pu commented on SPARK-17405:
--

[~joshrosen]
Yes, likely. The new hashmap asks for 64MB per task, and the default 
single-node setting has only a few hundred MB of memory in total.
We decided on 64MB because the single-memory-page design keeps things simple 
and fast, and in production it should generally hold that 64MB * cores << 
memory_capacity.
Maybe we should increase the default memory a bit? Or is it bad in general to 
have such an upfront cost of 64MB per task?
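As a back-of-envelope sketch of the failure mode described above: each task reserves a 64MB page up front, so with enough concurrent tasks the reservations alone can exceed the execution memory pool. The capacity numbers below are illustrative, not Spark's exact memory accounting:

```python
PAGE_BYTES = 64 * 1024 * 1024  # 64MB upfront page reservation per task


def upfront_reservation(cores: int) -> int:
    """Total bytes reserved before any rows are processed."""
    return cores * PAGE_BYTES


def fits(cores: int, memory_capacity_bytes: int) -> bool:
    """True if the upfront reservations fit in the execution memory pool
    (the intended invariant is 64MB * cores << memory_capacity)."""
    return upfront_reservation(cores) <= memory_capacity_bytes


# A few hundred MB of execution memory, as in a small local-mode setup:
capacity = 512 * 1024 * 1024
print(fits(4, capacity))    # 256MB of reservations fits -> True
print(fits(16, capacity))   # 1GB of reservations does not -> False (OOM)
```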


[jira] [Commented] (SPARK-17405) Simple aggregation query OOMing after SPARK-16525

2016-09-07 Thread Josh Rosen (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-17405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471690#comment-15471690
 ] 

Josh Rosen commented on SPARK-17405:


My hunch is that this is affected by the default number of cores in local mode: 
I believe my MBP uses 16 tasks by default, while the default parallelism is 
likely lower in Jenkins (and perhaps on your machine). If you have trouble 
reproducing this issue, I'd try explicitly running {{local\[16]}} or 
{{local\[32]}} to see if that reproduces it.
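For anyone following along, the suggestion above amounts to something like the following (the `local[N]` master string is Spark's standard local-mode syntax):

```shell
# Force 32 concurrent task slots in local mode; with default driver memory
# the per-task 64MB reservations can then exceed the execution memory pool,
# which reliably surfaces the OOM reported in this issue.
./bin/spark-shell --master "local[32]"
```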


[jira] [Commented] (SPARK-17405) Simple aggregation query OOMing after SPARK-16525

2016-09-07 Thread Qifan Pu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-17405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471640#comment-15471640
 ] 

Qifan Pu commented on SPARK-17405:
--

[~joshrosen] [~jlaskowski] Thanks for the comments and suggestions. I ran both 
of your queries on 03d77af9ec4ce9a42affd6ab4381ae5bd3c79a5a and both finished 
without any exceptions.
I'll do some static code analysis based on the log from [~jlaskowski].


[jira] [Commented] (SPARK-17405) Simple aggregation query OOMing after SPARK-16525

2016-09-06 Thread Jacek Laskowski (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-17405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15469487#comment-15469487
 ] 

Jacek Laskowski commented on SPARK-17405:
-

It definitely got better with today's build (Sept 7th). Yesterday, even a query 
as simple as {{Seq(1).toDF.groupBy('value).count.show}} died.


[jira] [Commented] (SPARK-17405) Simple aggregation query OOMing after SPARK-16525

2016-09-06 Thread Josh Rosen (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-17405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468737#comment-15468737
 ] 

Josh Rosen commented on SPARK-17405:


On the Spark Dev list, [~jlaskowski] found a simpler example which triggers 
this issue:

{quote}
{code}
scala> val intsMM = 1 to math.pow(10, 3).toInt
intsMM: scala.collection.immutable.Range.Inclusive = Range(1, 2, 3, 4,
5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23,
24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40,
41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57,
58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74,
75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91,
92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106,
107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120,
121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134,
135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148,
149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162,
163, 164, 165, 166, 167, 168, 169, 1...
scala> val df = intsMM.toDF("n").withColumn("m", 'n % 2)
df: org.apache.spark.sql.DataFrame = [n: int, m: int]

scala> df.groupBy('m).agg(sum('n)).show
...
16/09/06 22:28:02 ERROR Executor: Exception in task 6.0 in stage 0.0 (TID 6)
java.lang.OutOfMemoryError: Unable to acquire 262144 bytes of memory, got 0
...
{code}

Please see 
https://gist.github.com/jaceklaskowski/906d62b830f6c967a7eee5f8eb6e9237
{quote}


[jira] [Commented] (SPARK-17405) Simple aggregation query OOMing after SPARK-16525

2016-09-06 Thread Josh Rosen (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-17405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15467792#comment-15467792
 ] 

Josh Rosen commented on SPARK-17405:


[~qifan], I believe that you may be able to work around the UnresolvedException 
by checking out Spark as of your commit 
(03d77af9ec4ce9a42affd6ab4381ae5bd3c79a5a) rather than using the current master.

I'm running this query through the Spark Thrift Server, started using the script 
in {{sbin}}, with default settings on my MacBook Pro. Perhaps the default 
resource requirements are too high for the amount of {{local\[*]}} task 
parallelism? In either case, I think we need to fix this so that the out-of-the-box 
experience works correctly.
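
The {{local\[*]}} concern above can be made concrete with a toy fair-share calculation (names and numbers here are invented for illustration; this is not Spark's actual TaskMemoryManager policy, though it is similar in spirit): a fixed execution-memory pool divided evenly among N concurrent tasks means each task's slice shrinks as core count grows, until even a modest allocation like the 262,144 bytes in the stack trace no longer fits.

```java
public class FairShareDemo {
    // Toy fair-share policy: each of N active tasks may use at most
    // poolBytes / N of the shared execution-memory pool.
    static long maxPerTask(long poolBytes, int activeTasks) {
        return poolBytes / activeTasks;
    }

    public static void main(String[] args) {
        long poolBytes = 2L << 20;   // pretend 2 MB of execution memory
        long request = 262_144;      // the allocation size from the stack trace

        for (int tasks : new int[]{1, 4, 8, 16}) {
            long share = maxPerTask(poolBytes, tasks);
            System.out.printf("%2d tasks: share=%7d bytes -> request %s%n",
                    tasks, share, share >= request ? "fits" : "fails");
        }
    }
}
```

With these illustrative numbers the request fits up to 8 tasks and fails at 16, which is why a machine with more cores (hence more {{local\[*]}} tasks) can hit the OOM while a smaller machine does not.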


[jira] [Commented] (SPARK-17405) Simple aggregation query OOMing after SPARK-16525

2016-09-06 Thread Qifan Pu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-17405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15466676#comment-15466676
 ] 

Qifan Pu commented on SPARK-17405:
--

[~joshrosen] Thanks for reporting. I haven't been able to reproduce this because 
of a Catalyst bug I'm currently hitting: {{Error: 
org.apache.spark.sql.catalyst.analysis.UnresolvedException: Invalid call to 
foldable on unresolved object, tree: 'TIMESTAMP(2012-10-19 00:00:00.0) 
(state=,code=0)}}.
I will look into this further.
How much memory is configured for this specific test? One thing to note is that 
we added memory management through MemoryConsumer for the generated hashmap, so 
that part of memory usage is now correctly accounted for, which makes an OOM 
more likely to be thrown.
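
The accounting change described above can be sketched as a toy model (class and method names below are invented for illustration; this is not Spark's real MemoryConsumer/TaskMemoryManager API): once a consumer reports its allocations to a shared, bounded pool, a request the pool cannot fully satisfy surfaces as an {{Unable to acquire N bytes of memory, got M}} error instead of silently over-allocating.

```java
// Toy model of cooperative memory accounting.
final class ToyMemoryPool {
    private final long capacity;
    private long used;

    ToyMemoryPool(long capacity) { this.capacity = capacity; }

    // Grants as much of the request as is available, possibly 0 bytes.
    synchronized long acquire(long requested) {
        long granted = Math.min(requested, capacity - used);
        used += granted;
        return granted;
    }

    synchronized void release(long bytes) { used -= bytes; }
}

final class ToyConsumer {
    private final ToyMemoryPool pool;

    ToyConsumer(ToyMemoryPool pool) { this.pool = pool; }

    // Mirrors the failure mode in the stack trace: if the pool cannot grant
    // the full request, give back the partial grant and throw OOM.
    long[] allocateArray(int length) {
        long requested = 8L * length;  // 8 bytes per long
        long granted = pool.acquire(requested);
        if (granted < requested) {
            pool.release(granted);
            throw new OutOfMemoryError(
                "Unable to acquire " + requested + " bytes of memory, got " + granted);
        }
        return new long[length];
    }
}

public class ToyAccountingDemo {
    public static void main(String[] args) {
        // 1 MB pool; the first consumer's map takes most of it, so a later
        // modest request fails -- analogous to the newly accounted
        // aggregation hashmap making other requests visible-ly fail.
        ToyMemoryPool pool = new ToyMemoryPool(1 << 20);
        ToyConsumer hashMap = new ToyConsumer(pool);
        ToyConsumer sorter = new ToyConsumer(pool);

        hashMap.allocateArray(120_000);    // 960,000 bytes: fits in 1,048,576
        try {
            sorter.allocateArray(32_768);  // 262,144 bytes: only ~88 KB left
        } catch (OutOfMemoryError e) {
            // prints: Unable to acquire 262144 bytes of memory, got 88576
            System.out.println(e.getMessage());
        }
    }
}
```

Before the accounting change, the hashmap's 960,000 bytes would simply have been invisible to the pool, so in this toy model the second request would have appeared to succeed while the process actually over-committed memory.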
