[ https://issues.apache.org/jira/browse/SPARK-19372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16025927#comment-16025927 ]

Apache Spark commented on SPARK-19372:
--------------------------------------

User 'kiszk' has created a pull request for this issue:
https://github.com/apache/spark/pull/18119

> Code generation for Filter predicate including many OR conditions exceeds JVM method size limit
> ------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-19372
>                 URL: https://issues.apache.org/jira/browse/SPARK-19372
>             Project: Spark
>          Issue Type: Bug
>    Affects Versions: 2.1.0
>            Reporter: Jay Pranavamurthi
>            Assignee: Kazuaki Ishizaki
>             Fix For: 2.3.0
>
>         Attachments: wide400cols.csv
>
>
> For the attached csv file, the code below causes the exception
> "org.codehaus.janino.JaninoRuntimeException: Code of method
> "(Lorg/apache/spark/sql/catalyst/InternalRow;)Z" of class
> "org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificPredicate"
> grows beyond 64 KB".
> Code:
> {code:borderStyle=solid}
>   import org.apache.spark.SparkConf
>   import org.apache.spark.sql.SparkSession
>   import org.apache.spark.sql.functions.lit
>
>   val conf = new SparkConf().setMaster("local[1]")
>   val sqlContext = SparkSession.builder().config(conf).getOrCreate().sqlContext
>
>   // Load the attached 400-column CSV.
>   val dataframe = sqlContext
>     .read
>     .format("com.databricks.spark.csv")
>     .load("wide400cols.csv")
>
>   // Build a single predicate that ORs one comparison per column, over all 400 columns.
>   val filter = (0 to 399).foldLeft(lit(false)) { (e, index) =>
>     e.or(dataframe.col(dataframe.columns(index)) =!= s"column${index + 1}")
>   }
>
>   val filtered = dataframe.filter(filter)
>   filtered.show(100)
> {code}
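
The root cause is the JVM's 64 KB bytecode limit per method: the generated SpecificPredicate evaluates all 400 OR branches inside one method. As a rough illustration of the usual remedy, splitting the comparisons across small helper methods so no single method contains every branch, here is a minimal plain-Scala sketch (the object and method names are made up for illustration; the actual code-generation change lives in the pull request above and may differ):

{code:borderStyle=solid}
// Hypothetical sketch: split a 400-column OR predicate into chunks so each
// helper method stays well under the JVM's 64 KB per-method bytecode limit.
object SplitPredicateSketch {
  // A row is modeled here as a flat array of string column values.
  type Row = Array[String]

  // OR of the comparisons for columns [start, end);
  // analogous to one small generated helper method.
  private def chunkPredicate(row: Row, start: Int, end: Int): Boolean = {
    var i = start
    while (i < end) {
      if (row(i) != s"column${i + 1}") return true  // same check as in the report
      i += 1
    }
    false
  }

  // Top-level predicate: ORs the chunk results, so no single method
  // ever contains all 400 comparisons.
  def evaluate(row: Row, numCols: Int = 400, chunkSize: Int = 50): Boolean =
    (0 until numCols by chunkSize).exists { start =>
      chunkPredicate(row, start, math.min(start + chunkSize, numCols))
    }
}
{code}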


