[jira] [Commented] (SPARK-18207) class "org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection" grows beyond 64 KB

2016-11-12 Thread Don Drake (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-18207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15659978#comment-15659978
 ] 

Don Drake commented on SPARK-18207:
---

Hi, I was able to download a nightly SNAPSHOT release and verify that it 
resolves the issue for my project.  Thanks to everyone who contributed to 
this fix and got it merged in a timely manner.

> class 
> "org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection"
>  grows beyond 64 KB
> 
>
> Key: SPARK-18207
> URL: https://issues.apache.org/jira/browse/SPARK-18207
> Project: Spark
>  Issue Type: Bug
>  Components: Optimizer, SQL
>Affects Versions: 2.0.1, 2.1.0
>Reporter: Don Drake
>Assignee: Kazuaki Ishizaki
> Fix For: 2.1.0
>
> Attachments: spark-18207.txt
>
>
> I have two wide dataframes that contain nested data structures. When I 
> explode one of the dataframes, records with an empty nested structure are 
> dropped (outer explode is not supported), so I create a similar dataframe 
> with null values and union the two together.  See SPARK-13721 for more 
> details on why this is necessary.
> I was hoping that SPARK-16845 would address my issue, but it does not. 
>  I was asked by [~lwlin] to open this JIRA.  
> I will attach a code snippet that can be pasted into spark-shell to 
> reproduce my code and the exception.  This worked fine in Spark 1.6.x.
> {code}
> org.apache.spark.SparkException: Job aborted due to stage failure: Task 35 in 
> stage 5.0 failed 4 times, most recent failure: Lost task 35.3 in stage 5.0 
> (TID 812, somehost.mydomain.com, executor 8): 
> java.util.concurrent.ExecutionException: java.lang.Exception: failed to 
> compile: org.codehaus.janino.JaninoRuntimeException: Code of method 
> "apply(Lorg/apache/spark/sql/catalyst/InternalRow;)Lorg/apache/spark/sql/catalyst/expressions/UnsafeRow;"
>  of class 
> "org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection"
>  grows beyond 64 KB
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-18207) class "org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection" grows beyond 64 KB

2016-11-02 Thread Apache Spark (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-18207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15631321#comment-15631321
 ] 

Apache Spark commented on SPARK-18207:
--

User 'kiszk' has created a pull request for this issue:
https://github.com/apache/spark/pull/15745




[jira] [Commented] (SPARK-18207) class "org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection" grows beyond 64 KB

2016-11-02 Thread Kazuaki Ishizaki (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-18207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15629522#comment-15629522
 ] 

Kazuaki Ishizaki commented on SPARK-18207:
--

I created a smaller program that reproduces this problem and shows why the 
error occurs.
The program calculates a hash value for a row that includes 1000 String 
fields. During code generation, {{HashExpression.doGenCode}} emits a large 
number of Java statements to calculate the row's hash value, and the 
generated method exceeds the JVM's 64 KB limit on a method's bytecode.

I have just created a fix. Should I submit the PR against this JIRA entry, 
or against SPARK-16845?
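The growth mechanism described above can be sketched outside of Spark. In the sketch below, the per-field fragment template, the {{InternalRow}}/{{UTF8String}} names, and the 100-fields-per-method cap are illustrative assumptions rather than Spark's actual codegen templates, and source-character count is used as a rough stand-in for bytecode size. The split strategy mirrors the general idea behind breaking generated expression code into multiple helper methods (as Spark's {{CodegenContext.splitExpressions}} does):

```java
import java.util.ArrayList;
import java.util.List;

public class CodegenSizeSketch {

    // Roughly the shape of the per-field Java a hash-codegen template might
    // emit for one String column (illustrative, not Spark's real template).
    static String hashFragment(int i) {
        return "boolean isNull_" + i + " = row.isNullAt(" + i + ");\n"
             + "UTF8String value_" + i + " = isNull_" + i
             + " ? null : row.getUTF8String(" + i + ");\n"
             + "if (!isNull_" + i + ") { hash = hashUnsafeBytes(value_" + i
             + ", hash); }\n";
    }

    // Naive strategy: every fragment is inlined into a single apply() body,
    // so the method's size grows linearly with the number of columns.
    static String singleMethodBody(int numFields) {
        StringBuilder body = new StringBuilder();
        for (int i = 0; i < numFields; i++) {
            body.append(hashFragment(i));
        }
        return body.toString();
    }

    // Split strategy: cap each helper method at perMethod fields, so no one
    // method body grows with the total column count.
    static List<String> splitMethods(int numFields, int perMethod) {
        List<String> methods = new ArrayList<>();
        for (int start = 0; start < numFields; start += perMethod) {
            StringBuilder m = new StringBuilder();
            m.append("private void applyHash_").append(start / perMethod)
             .append("(InternalRow row) {\n");
            int end = Math.min(start + perMethod, numFields);
            for (int i = start; i < end; i++) {
                m.append(hashFragment(i));
            }
            m.append("}\n");
            methods.add(m.toString());
        }
        return methods;
    }

    public static void main(String[] args) {
        int sourceChars = singleMethodBody(1000).length();
        List<String> parts = splitMethods(1000, 100);
        int largest = 0;
        for (String m : parts) {
            largest = Math.max(largest, m.length());
        }
        System.out.println("single method body: " + sourceChars + " chars");
        System.out.println("split into " + parts.size()
                + " helper methods, largest body: " + largest + " chars");
    }
}
```

With 1000 fields the single-method body runs to well over 64 K characters of source, while each 100-field helper stays far below that. The real JVM limit is 65535 bytes of bytecode per compiled method, not source characters, but the linear growth with column count is the same.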




[jira] [Commented] (SPARK-18207) class "org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection" grows beyond 64 KB

2016-11-02 Thread Don Drake (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-18207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15629177#comment-15629177
 ] 

Don Drake commented on SPARK-18207:
---

The difference between my case and the other test cases is that my scenario 
involves a wide dataframe (800+ columns) that also has multiple nested 
structures (arrays of classes) involved in a SQL query (union).

I have verified that [~lwlin]'s fix does not work for my case, although it 
does work for wide dataframes without nested structures.

I agree it's similar to the others, but more complicated to reproduce.





[jira] [Commented] (SPARK-18207) class "org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection" grows beyond 64 KB

2016-11-02 Thread Sean Owen (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-18207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15629154#comment-15629154
 ] 

Sean Owen commented on SPARK-18207:
---

Can you note the difference here? If he's just saying that one particular 
fix doesn't resolve the issue, I don't know whether that means it's a 
different issue.




[jira] [Commented] (SPARK-18207) class "org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection" grows beyond 64 KB

2016-11-02 Thread Don Drake (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-18207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15629140#comment-15629140
 ] 

Don Drake commented on SPARK-18207:
---

I opened it based on [~lwlin]'s suggestion in the comments of SPARK-16845.  




[jira] [Commented] (SPARK-18207) class "org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection" grows beyond 64 KB

2016-11-02 Thread Sean Owen (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-18207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15628338#comment-15628338
 ] 

Sean Owen commented on SPARK-18207:
---

SPARK-16845 is unresolved and this seems to be exactly the same issue. Why 
open another JIRA?
