[jira] [Commented] (SPARK-37609) Transient StackOverflowError on DataFrame from Catalyst QueryPlan

2022-08-29 Thread Rafal Wojdyla (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-37609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17597026#comment-17597026
 ] 

Rafal Wojdyla commented on SPARK-37609:
---

Experienced another instance of this issue; this time the query is fairly simple, 
but there are a couple of thousand columns in the DataFrame.
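
For illustration only, a minimal sketch of the kind of wide-DataFrame pattern that 
tends to produce very deep Catalyst plans (hypothetical code, not the actual 
workload; column names and counts are made up):

{code:python}
# Hypothetical sketch: a simple query over a very wide DataFrame.
# Each chained withColumn stacks another Project node onto the logical plan,
# so a few thousand columns can make the plan tree deep enough that recursive
# rewrites such as QueryPlan.transformUpWithNewOutput run out of stack.
# Analysis also gets slow as the column count grows.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

df = spark.range(1000)
for i in range(2000):  # made-up column count
    df = df.withColumn(f"c{i}", F.lit(i))

df.count()  # may or may not trip the transient StackOverflowError described above
{code}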

> Transient StackOverflowError on DataFrame from Catalyst QueryPlan
> -
>
> Key: SPARK-37609
> URL: https://issues.apache.org/jira/browse/SPARK-37609
> Project: Spark
>  Issue Type: Bug
>  Components: PySpark
>Affects Versions: 3.1.2
> Environment: py:3.9
>Reporter: Rafal Wojdyla
>Priority: Major
>
> I sporadically observe a StackOverflowError from Catalyst's QueryPlan (for a 
> relatively complicated query); below is a stacktrace from the {{count}} on 
> that DF. It's a bit troubling because it's a transient error: with enough 
> retries (no change to the code, probably some kind of cache?), I can get the 
> op to work :(
> {noformat}
> ---
> Py4JJavaError Traceback (most recent call last)
> ~/miniconda3/envs/tr-dev/lib/python3.9/site-packages/pyspark/sql/dataframe.py 
> in count(self)
> 662 2
> 663 """
> --> 664 return int(self._jdf.count())
> 665 
> 666 def collect(self):
> ~/miniconda3/envs/tr-dev/lib/python3.9/site-packages/py4j/java_gateway.py in 
> __call__(self, *args)
>1302 
>1303 answer = self.gateway_client.send_command(command)
> -> 1304 return_value = get_return_value(
>1305 answer, self.gateway_client, self.target_id, self.name)
>1306 
> ~/miniconda3/envs/tr-dev/lib/python3.9/site-packages/pyspark/sql/utils.py in 
> deco(*a, **kw)
> 109 def deco(*a, **kw):
> 110 try:
> --> 111 return f(*a, **kw)
> 112 except py4j.protocol.Py4JJavaError as e:
> 113 converted = convert_exception(e.java_exception)
> ~/miniconda3/envs/tr-dev/lib/python3.9/site-packages/py4j/protocol.py in 
> get_return_value(answer, gateway_client, target_id, name)
> 324 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
> 325 if answer[1] == REFERENCE_TYPE:
> --> 326 raise Py4JJavaError(
> 327 "An error occurred while calling {0}{1}{2}.\n".
> 328 format(target_id, ".", name), value)
> Py4JJavaError: An error occurred while calling o9123.count.
> : java.lang.StackOverflowError
>   at 
> org.apache.spark.sql.catalyst.plans.QueryPlan.rewrite$1(QueryPlan.scala:188)
>   at 
> org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$transformUpWithNewOutput$1(QueryPlan.scala:193)
>   at 
> org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$mapChildren$1(TreeNode.scala:408)
>   at 
> org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:244)
>   at 
> org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:406)
>   at 
> org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:359)
>   at 
> org.apache.spark.sql.catalyst.plans.QueryPlan.rewrite$1(QueryPlan.scala:192)
>   at 
> org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$transformUpWithNewOutput$1(QueryPlan.scala:193)
>   at 
> org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$mapChildren$1(TreeNode.scala:408)
>   at 
> org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:244)
>   at 
> org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:406)
>   at 
> org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:359)
>   at 
> org.apache.spark.sql.catalyst.plans.QueryPlan.rewrite$1(QueryPlan.scala:192)
> ...
> {noformat}






[jira] [Commented] (SPARK-37609) Transient StackOverflowError on DataFrame from Catalyst QueryPlan

2022-05-31 Thread angerszhu (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-37609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17544321#comment-17544321
 ] 

angerszhu commented on SPARK-37609:
---

Increasing -Xss can resolve this, but we should really refactor the current 
code...
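
For anyone else hitting this, a sketch of how the -Xss workaround can be wired up 
from PySpark (the 16m value is an arbitrary example, and in client mode the driver 
option may need to go on the spark-submit command line or into spark-defaults.conf 
rather than the session builder):

{code:python}
# Sketch of the -Xss workaround: enlarge the JVM thread stack size.
# Driver-side planning is where QueryPlan.rewrite recurses, so the driver
# stack size matters most; the executor setting is included for symmetry.
# In client mode, prefer `spark-submit --driver-java-options "-Xss16m"`,
# because the driver JVM is already running when SparkConf is applied.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .config("spark.driver.extraJavaOptions", "-Xss16m")
    .config("spark.executor.extraJavaOptions", "-Xss16m")
    .getOrCreate()
)
{code}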




[jira] [Commented] (SPARK-37609) Transient StackOverflowError on DataFrame from Catalyst QueryPlan

2022-05-31 Thread angerszhu (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-37609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17544281#comment-17544281
 ] 

angerszhu commented on SPARK-37609:
---

[~yumwang] It seems to just be a very complex table schema; it does not reproduce 
every time. I have told the user to try increasing -Xss to see whether that 
resolves the problem.




[jira] [Commented] (SPARK-37609) Transient StackOverflowError on DataFrame from Catalyst QueryPlan

2022-05-31 Thread Yuming Wang (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-37609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17544204#comment-17544204
 ] 

Yuming Wang commented on SPARK-37609:
-

[~angerszhuuu] How can this issue be reproduced?




[jira] [Commented] (SPARK-37609) Transient StackOverflowError on DataFrame from Catalyst QueryPlan

2022-05-31 Thread angerszhu (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-37609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17544192#comment-17544192
 ] 

angerszhu commented on SPARK-37609:
---

The same error occurs in Spark 3.1:

{code:java}
22/05/26 15:26:48 ERROR ApplicationMaster: User class threw exception: 
java.lang.StackOverflowError
java.lang.StackOverflowError
at scala.collection.TraversableOnce.nonEmpty(TraversableOnce.scala:114)
at scala.collection.TraversableOnce.nonEmpty$(TraversableOnce.scala:114)
at scala.collection.AbstractTraversable.nonEmpty(Traversable.scala:108)
at 
org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:358)
at 
org.apache.spark.sql.catalyst.plans.QueryPlan.rewrite$1(QueryPlan.scala:192)
at 
org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$transformUpWithNewOutput$1(QueryPlan.scala:193)
at 
org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$mapChildren$1(TreeNode.scala:408)
at 
org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:244)
at 
org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:406)
at 
org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:359)
at 
org.apache.spark.sql.catalyst.plans.QueryPlan.rewrite$1(QueryPlan.scala:192)
at 
org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$transformUpWithNewOutput$1(QueryPlan.scala:193)
at 
org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$mapChildren$1(TreeNode.scala:408)
at 
org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:244)
at 
org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:406)
at 
org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:359)
at 
org.apache.spark.sql.catalyst.plans.QueryPlan.rewrite$1(QueryPlan.scala:192)
at 
org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$transformUpWithNewOutput$1(QueryPlan.scala:193)
at 
org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$mapChildren$1(TreeNode.scala:408)
at 
org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:244)
at 
org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:406)
at 
org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:359)
at 
org.apache.spark.sql.catalyst.plans.QueryPlan.rewrite$1(QueryPlan.scala:192)
at 
org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$transformUpWithNewOutput$1(QueryPlan.scala:193)
at 
org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$mapChildren$1(TreeNode.scala:408)
at 
org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:244)
at 
org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:406)
at 
org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:359)
at 
org.apache.spark.sql.catalyst.plans.QueryPlan.rewrite$1(QueryPlan.scala:192)
at 
org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$transformUpWithNewOutput$1(QueryPlan.scala:193)
at 
org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$mapChildren$1(TreeNode.scala:408)
at 
org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:244)
at 
org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:406)
at 
org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:359)
at 
org.apache.spark.sql.catalyst.plans.QueryPlan.rewrite$1(QueryPlan.scala:192)
at 
org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$transformUpWithNewOutput$1(QueryPlan.scala:193)
at 
org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$mapChildren$1(TreeNode.scala:408)
at 
org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:244)
at 
org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:406)
at 
org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:359)
at 
org.apache.spark.sql.catalyst.plans.QueryPlan.rewrite$1(QueryPlan.scala:192)
at 
org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$transformUpWithNewOutput$1(QueryPlan.scala:193)
at 
org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$mapChildren$1(TreeNode.scala:408)
at 
org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:244)
at 
org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:406)
at 
org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:359)
at 
org.apache.spark.sql.catalyst.plans.QueryPlan.rewrite$1(QueryPlan.scala:192)
at 
org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$transformUpWithNewOutput$1(QueryPlan.scala:193)
...
{code}

[jira] [Commented] (SPARK-37609) Transient StackOverflowError on DataFrame from Catalyst QueryPlan

2021-12-13 Thread Rafal Wojdyla (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-37609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17458691#comment-17458691
 ] 

Rafal Wojdyla commented on SPARK-37609:
---

[~hyukjin.kwon] yep, I understand that. If I find some time, I will try to figure 
it out and post reproduction code.




[jira] [Commented] (SPARK-37609) Transient StackOverflowError on DataFrame from Catalyst QueryPlan

2021-12-12 Thread Hyukjin Kwon (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-37609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17458103#comment-17458103
 ] 

Hyukjin Kwon commented on SPARK-37609:
--

[~ravwojdyla] I don't think people will try to reproduce and debug this for further 
investigation as-is. It would be great to have a minimised, self-contained 
reproducer here.




[jira] [Commented] (SPARK-37609) Transient StackOverflowError on DataFrame from Catalyst QueryPlan

2021-12-10 Thread Rafal Wojdyla (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-37609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17457483#comment-17457483
 ] 

Rafal Wojdyla commented on SPARK-37609:
---

[~yumwang] I don't have public code to share, and I don't have minimal reproduction 
code yet (no time to put it together right now). The DF in this specific case had 
162 columns, and I can't share the query plan without anonymising it 
(https://issues.apache.org/jira/browse/SPARK-37610). Is there anything else I could 
do in the meantime that would not require a significant amount of work?




[jira] [Commented] (SPARK-37609) Transient StackOverflowError on DataFrame from Catalyst QueryPlan

2021-12-10 Thread Yuming Wang (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-37609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17457481#comment-17457481
 ] 

Yuming Wang commented on SPARK-37609:
-

How can this issue be reproduced?
