spark git commit: [SQL] [MINOR] use catalyst type converter in ScalaUdf

2015-05-17 Thread yhuai
Repository: spark
Updated Branches:
  refs/heads/branch-1.4 e0632ffaf -> be66d1924


[SQL] [MINOR] use catalyst type converter in ScalaUdf

This is a follow-up of https://github.com/apache/spark/pull/5154: we can speed up
Scala UDF evaluation by creating the type converter in advance.
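
For context, any Scala UDF defined through the public API is planned as this
ScalaUdf expression and evaluated once per input row, so that is the path being
sped up. A minimal sketch of such a UDF with the 1.4-era API (the Record case
class and the plusOne name are made up for the example):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.functions.udf

case class Record(id: Int)

object ScalaUdfExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("scala-udf-example").setMaster("local[*]"))
    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._

    val df = sc.parallelize(Seq(Record(1), Record(2), Record(3))).toDF()

    // plusOne runs through ScalaUdf.eval for every row; its result is
    // converted to Catalyst's internal representation by the type converter.
    val plusOne = udf((x: Int) => x + 1)
    df.select(plusOne(df("id"))).show()

    sc.stop()
  }
}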

Author: Wenchen Fan cloud0...@outlook.com

Closes #6182 from cloud-fan/tmp and squashes the following commits:

241cfe9 [Wenchen Fan] use converter in ScalaUdf

(cherry picked from commit 2f22424e9f6624097b292cb70e00787b69d80718)
Signed-off-by: Yin Huai yh...@databricks.com


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/be66d192
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/be66d192
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/be66d192

Branch: refs/heads/branch-1.4
Commit: be66d1924edc5c99987c80d445f34a690c3789a9
Parents: e0632ff
Author: Wenchen Fan cloud0...@outlook.com
Authored: Sun May 17 16:51:57 2015 -0700
Committer: Yin Huai yh...@databricks.com
Committed: Sun May 17 16:52:21 2015 -0700

--
 .../org/apache/spark/sql/catalyst/expressions/ScalaUdf.scala | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/be66d192/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/ScalaUdf.scala
--
diff --git a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/ScalaUdf.scala b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/ScalaUdf.scala
index 9a77ca6..d22eb10 100644
--- a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/ScalaUdf.scala
+++ b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/ScalaUdf.scala
@@ -956,7 +956,7 @@ case class ScalaUdf(function: AnyRef, dataType: DataType, children: Seq[Expressi
   }
 
   // scalastyle:on
-
-  override def eval(input: Row): Any = CatalystTypeConverters.convertToCatalyst(f(input), dataType)
+  val converter = CatalystTypeConverters.createToCatalystConverter(dataType)
+  override def eval(input: Row): Any = converter(f(input))
 
 }
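
The change simply hoists the type-converter lookup out of the per-row eval
call: the dispatch on dataType now happens once when the expression is
constructed, and the returned closure is reused for every row. A minimal,
self-contained sketch of that pattern, with made-up names (SimpleConverters,
SketchUdf) standing in for Spark's internals:

object ConverterSketch {
  sealed trait DataType
  case object IntType extends DataType
  case object StringType extends DataType

  object SimpleConverters {
    // Old path: re-dispatch on dataType for every single row.
    def convertToCatalyst(value: Any, dataType: DataType): Any = dataType match {
      case IntType    => value.asInstanceOf[Int]
      case StringType => value.toString
    }

    // New path: dispatch once, return a closure that is reused per row.
    def createToCatalystConverter(dataType: DataType): Any => Any = dataType match {
      case IntType    => (v: Any) => v.asInstanceOf[Int]
      case StringType => (v: Any) => v.toString
    }
  }

  case class SketchUdf(f: Any => Any, dataType: DataType) {
    // Built once at construction time, as in the patched ScalaUdf.
    private val converter = SimpleConverters.createToCatalystConverter(dataType)
    def eval(input: Any): Any = converter(f(input))
  }

  def main(args: Array[String]): Unit = {
    val plusOne = SketchUdf((x: Any) => x.asInstanceOf[Int] + 1, IntType)
    println(plusOne.eval(41)) // prints 42
  }
}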





spark git commit: [SQL] [MINOR] use catalyst type converter in ScalaUdf

2015-05-17 Thread yhuai
Repository: spark
Updated Branches:
  refs/heads/master ca4257aec -> 2f22424e9


[SQL] [MINOR] use catalyst type converter in ScalaUdf

This is a follow-up of https://github.com/apache/spark/pull/5154: we can speed up
Scala UDF evaluation by creating the type converter in advance.

Author: Wenchen Fan cloud0...@outlook.com

Closes #6182 from cloud-fan/tmp and squashes the following commits:

241cfe9 [Wenchen Fan] use converter in ScalaUdf


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/2f22424e
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/2f22424e
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/2f22424e

Branch: refs/heads/master
Commit: 2f22424e9f6624097b292cb70e00787b69d80718
Parents: ca4257a
Author: Wenchen Fan cloud0...@outlook.com
Authored: Sun May 17 16:51:57 2015 -0700
Committer: Yin Huai yh...@databricks.com
Committed: Sun May 17 16:51:57 2015 -0700

--
 .../org/apache/spark/sql/catalyst/expressions/ScalaUdf.scala | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/2f22424e/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/ScalaUdf.scala
--
diff --git a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/ScalaUdf.scala b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/ScalaUdf.scala
index 9a77ca6..d22eb10 100644
--- a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/ScalaUdf.scala
+++ b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/ScalaUdf.scala
@@ -956,7 +956,7 @@ case class ScalaUdf(function: AnyRef, dataType: DataType, children: Seq[Expressi
   }
 
   // scalastyle:on
-
-  override def eval(input: Row): Any = CatalystTypeConverters.convertToCatalyst(f(input), dataType)
+  val converter = CatalystTypeConverters.createToCatalystConverter(dataType)
+  override def eval(input: Row): Any = converter(f(input))
 
 }

