[GitHub] ueshin commented on a change in pull request #23534: [SPARK-26610][PYTHON] Fix inconsistency between toJSON Method in Python and Scala.

2019-01-27 Thread GitBox
ueshin commented on a change in pull request #23534: [SPARK-26610][PYTHON] Fix 
inconsistency between toJSON Method in Python and Scala.
URL: https://github.com/apache/spark/pull/23534#discussion_r251281383
 
 

 ##
 File path: docs/sql-migration-guide-upgrade.md
 ##
 @@ -45,6 +45,8 @@ displayTitle: Spark SQL Upgrading Guide
 
  - In Spark version 2.4 and earlier, if `org.apache.spark.sql.functions.udf(Any, DataType)` gets a Scala closure with a primitive-type argument, the returned UDF will return null if the input value is null. Since Spark 3.0, the UDF will return the default value of the Java type if the input value is null. For example, for `val f = udf((x: Int) => x, IntegerType)`, `f($"x")` will return null in Spark 2.4 and earlier if column `x` is null, and return 0 in Spark 3.0. This behavior change is introduced because Spark 3.0 is built with Scala 2.12 by default.
 
+  - Since Spark 3.0, `DataFrame.toJSON()` in PySpark returns a `DataFrame` of JSON strings instead of an `RDD`. The method in Scala/Java was changed to return `DataFrame` before, but the one in PySpark was not changed at that time. If you still want an `RDD`, you can restore the previous behavior by setting `spark.sql.legacy.pyspark.toJsonShouldReturnDataFrame` to `false`.
 
 Review comment:
   Actually, I still feel it's inconsistent, because the abstraction layer differs between RDD and DataFrame/Dataset. I guess users expect it to return something at the same abstraction level, i.e., an RDD method returns an RDD and a DataFrame method returns a DataFrame, so I'd rather map Dataset to DataFrame in Python than to RDD.
   We might still need to discuss the function `map` or the behavior of `DataFrameReader.csv`/`.json`, as @HyukjinKwon mentioned.
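
A minimal, Spark-free sketch of the two shapes under discussion (plain Python; `rows` is made-up data standing in for a DataFrame):

```python
import json

rows = [{"age": 2, "name": "Alice"}, {"age": 5, "name": "Bob"}]

# Pre-change PySpark: toJSON yields bare JSON strings, i.e. an RDD of str.
rdd_like = [json.dumps(r, separators=(",", ":")) for r in rows]

# Post-change: each JSON string sits in a single-column row named "value",
# matching the Row(value=...) shape shown in the updated doctest.
df_like = [{"value": s} for s in rdd_like]

print(rdd_like[0])  # {"age":2,"name":"Alice"}
print(df_like[0])   # {'value': '{"age":2,"name":"Alice"}'}
```

This is only an illustration of the return shapes, not PySpark code; the real change wraps a Java `Dataset` rather than Python lists.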


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org




[GitHub] ueshin commented on a change in pull request #23534: [SPARK-26610][PYTHON] Fix inconsistency between toJSON Method in Python and Scala.

2019-01-14 Thread GitBox
ueshin commented on a change in pull request #23534: [SPARK-26610][PYTHON] Fix 
inconsistency between toJSON Method in Python and Scala.
URL: https://github.com/apache/spark/pull/23534#discussion_r247518316
 
 

 ##
 File path: python/pyspark/sql/dataframe.py
 ##
 @@ -109,15 +109,18 @@ def stat(self):
     @ignore_unicode_prefix
     @since(1.3)
     def toJSON(self, use_unicode=True):
-        """Converts a :class:`DataFrame` into a :class:`RDD` of string.
+        """Converts a :class:`DataFrame` into a :class:`DataFrame` of JSON string.
 
-        Each row is turned into a JSON document as one element in the returned RDD.
+        Each row is turned into a JSON document as one element in the returned DataFrame.
 
         >>> df.toJSON().first()
-        u'{"age":2,"name":"Alice"}'
+        Row(value=u'{"age":2,"name":"Alice"}')
         """
-        rdd = self._jdf.toJSON()
-        return RDD(rdd.toJavaRDD(), self._sc, UTF8Deserializer(use_unicode))
+        jdf = self._jdf.toJSON()
+        if self.sql_ctx._conf.pysparkDataFrameToJSONShouldReturnDataFrame():
+            return DataFrame(jdf, self.sql_ctx)
+        else:
+            return RDD(jdf.toJavaRDD(), self._sc, UTF8Deserializer(use_unicode))
 
 Review comment:
   That sounds interesting. Maybe we should fix `DataFrameReader.csv` and `DataFrameReader.json` to accept a DataFrame of strings in Python, regardless of the discussion here.
   Let me try.
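
What such a reader overload would do per row can be sketched with the standard library (illustrative only, not the real `DataFrameReader` API):

```python
import json

# A single column of JSON strings, as the patched toJSON would produce.
json_strings = ['{"age":2,"name":"Alice"}', '{"age":5,"name":"Bob"}']

# Parse each string back into a structured record; a reader accepting a
# DataFrame of strings would do roughly this, plus schema inference,
# for every row of the input column.
records = [json.loads(s) for s in json_strings]

print(records[0]["name"])  # Alice
```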





[GitHub] ueshin commented on a change in pull request #23534: [SPARK-26610][PYTHON] Fix inconsistency between toJSON Method in Python and Scala.

2019-01-14 Thread GitBox
ueshin commented on a change in pull request #23534: [SPARK-26610][PYTHON] Fix 
inconsistency between toJSON Method in Python and Scala.
URL: https://github.com/apache/spark/pull/23534#discussion_r247411271
 
 

 ##
 File path: python/pyspark/sql/dataframe.py
 ##
 @@ -109,15 +109,18 @@ def stat(self):
     @ignore_unicode_prefix
     @since(1.3)
     def toJSON(self, use_unicode=True):
-        """Converts a :class:`DataFrame` into a :class:`RDD` of string.
+        """Converts a :class:`DataFrame` into a :class:`DataFrame` of JSON string.
 
-        Each row is turned into a JSON document as one element in the returned RDD.
+        Each row is turned into a JSON document as one element in the returned DataFrame.
 
         >>> df.toJSON().first()
-        u'{"age":2,"name":"Alice"}'
+        Row(value=u'{"age":2,"name":"Alice"}')
         """
-        rdd = self._jdf.toJSON()
-        return RDD(rdd.toJavaRDD(), self._sc, UTF8Deserializer(use_unicode))
+        jdf = self._jdf.toJSON()
+        if self.sql_ctx._conf.pysparkDataFrameToJSONShouldReturnDataFrame():
+            return DataFrame(jdf, self.sql_ctx)
+        else:
+            return RDD(jdf.toJavaRDD(), self._sc, UTF8Deserializer(use_unicode))
 
 Review comment:
   Good point, but I feel it's natural to return a DataFrame, because DataFrame in Python corresponds to DataFrame/Dataset in Scala/Java, whereas returning an RDD seems odd to me.
   I understand what you mean, and that is actually one of the reasons I added a config to restore the previous behavior.
   cc @gatorsmile @cloud-fan 
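
The config-gated dispatch in the diff above can be sketched without Spark; `to_json` and its `return_dataframe` flag below are illustrative stand-ins, not real PySpark API:

```python
import json

def to_json(rows, return_dataframe=True):
    """Mimic the patched toJSON: JSON-encode each row, then either wrap
    the strings in single-column "value" rows (DataFrame-like, the new
    default) or return them bare (legacy RDD-like behavior)."""
    strings = [json.dumps(r, separators=(",", ":")) for r in rows]
    if return_dataframe:
        return [{"value": s} for s in strings]
    return strings

rows = [{"age": 2, "name": "Alice"}]
print(to_json(rows))                          # [{'value': '{"age":2,"name":"Alice"}'}]
print(to_json(rows, return_dataframe=False))  # ['{"age":2,"name":"Alice"}']
```

In the real patch the flag comes from the `spark.sql.legacy.pyspark.toJsonShouldReturnDataFrame` SQL conf rather than a function argument.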

