[jira] [Commented] (SPARK-16042) Eliminate nullcheck code at projection for an array type
[ https://issues.apache.org/jira/browse/SPARK-16042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15851469#comment-15851469 ]

Hyukjin Kwon commented on SPARK-16042:
--------------------------------------

[~kiszk], would this JIRA maybe be resolvable per https://github.com/apache/spark/pull/13757#issuecomment-270453328?

> Eliminate nullcheck code at projection for an array type
> --------------------------------------------------------
>
>                 Key: SPARK-16042
>                 URL: https://issues.apache.org/jira/browse/SPARK-16042
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>            Reporter: Kazuaki Ishizaki
>
> When we run a Spark program with a projection for an array type, a null check
> is generated at each call that writes an element of the array. If we know at
> compilation time that none of the elements can be {{null}}, we can eliminate
> the null-check code.
> {code}
> val df = sparkContext.parallelize(Seq(1.0, 2.0), 1).toDF("v")
> df.selectExpr("Array(v + 2.2, v + 3.3)").collect
> {code}

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
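The idea in the description can be sketched outside Spark. The following is a minimal, hypothetical Scala illustration (not Spark's actual generated code, and the writer methods are invented names): when the element type is statically known to be non-nullable, the per-element null check disappears from the array-writer loop entirely.

```scala
// Hypothetical sketch of the null-check elimination described in SPARK-16042.
// Neither method is Spark codegen output; they only mirror its shape.
object NullCheckElimination {
  // Writer shaped like generated code for a NULLABLE element type:
  // every element write carries a null check.
  def writeWithNullChecks(elems: Array[java.lang.Double]): Array[Double] = {
    val out = new Array[Double](elems.length)
    var i = 0
    while (i < elems.length) {
      if (elems(i) == null) out(i) = 0.0 // null slot: write a placeholder
      else out(i) = elems(i)
      i += 1
    }
    out
  }

  // Writer shaped like generated code when elements are statically known
  // to be non-null: the branch is gone from the hot loop.
  def writeNoNullChecks(elems: Array[Double]): Array[Double] = {
    val out = new Array[Double](elems.length)
    var i = 0
    while (i < elems.length) {
      out(i) = elems(i) // no null check needed
      i += 1
    }
    out
  }

  def main(args: Array[String]): Unit = {
    // Array(v + 2.2, v + 3.3) over a non-nullable double column can never
    // contain null, so the simpler writer is safe.
    val v = 1.0
    println(writeNoNullChecks(Array(v + 2.2, v + 3.3)).mkString(","))
  }
}
```

In the reproduction above, `Array(v + 2.2, v + 3.3)` is built from a non-nullable `DoubleType` column, which is exactly the case where the check could be proven dead at code-generation time.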
[jira] [Commented] (SPARK-16042) Eliminate nullcheck code at projection for an array type
[ https://issues.apache.org/jira/browse/SPARK-16042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15337594#comment-15337594 ]

Apache Spark commented on SPARK-16042:
--------------------------------------

User 'kiszk' has created a pull request for this issue:
https://github.com/apache/spark/pull/13757