Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/13839#discussion_r68184305
--- Diff: R/pkg/R/DataFrame.R ---
@@ -194,7 +195,13 @@ setMethod("isLocal",
setMethod("showDF",
signature(x = "SparkDataFrame"),
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/13839#discussion_r68179518
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -267,11 +267,13 @@ def isStreaming(self):
return self._jdf.isStreaming()
@since(1.3)
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/13839#discussion_r68179336
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -267,11 +267,13 @@ def isStreaming(self):
return self._jdf.isStreaming()
@since(1.3)
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/13839#discussion_r68178484
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -267,11 +267,13 @@ def isStreaming(self):
return self._jdf.isStreaming()
@since(1.3)
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/13839#discussion_r68036752
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -251,7 +253,11 @@ class Dataset[T] private[sql](
case seq: Seq[_] =>
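For context, here is a minimal standalone sketch of the kind of cell-formatting-and-truncation logic this hunk in `Dataset.showString` touches. This is not the PR's actual code; the helper name `formatCell` and the `truncate: Int` parameter are illustrative assumptions only.

```scala
// Illustrative sketch only: render a cell to a string, then cut it to a
// caller-supplied width, marking the cut with "...".
object ShowTruncation {
  def formatCell(cell: Any, truncate: Int): String = {
    val str = cell match {
      case null        => "null"
      case seq: Seq[_] => seq.mkString("[", ", ", "]")
      case other       => other.toString
    }
    // A non-positive `truncate` means "do not truncate".
    if (truncate > 0 && str.length > truncate) {
      if (truncate < 4) str.substring(0, truncate)
      else str.substring(0, truncate - 3) + "..."
    } else {
      str
    }
  }

  def main(args: Array[String]): Unit = {
    println(formatCell(Seq("a very long value", "another long value"), 20))
    println(formatCell("short", 20))
  }
}
```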
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/13839#discussion_r68034224
--- Diff: .gitignore ---
@@ -77,3 +77,4 @@ spark-warehouse/
# For R session data
.RData
.RHistory
+.Rhistory
--- End diff ---
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/13839
[SPARK-16128][SQL] Add truncateTo parameter to Dataset.show function.
## What changes were proposed in this pull request?
Allowing truncation to a specific number of characters is convenient.
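As a hedged illustration of the proposed usage (the exact overload and parameter name are assumptions based on the PR title; the merged API may differ), something like the following could be run from spark-shell or as a small standalone app:

```scala
import org.apache.spark.sql.SparkSession

object ShowTruncateDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("show-truncate-demo")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    val df = Seq(
      ("SPARK-16128", "a fairly long description that the default show() would cut at 20 characters")
    ).toDF("id", "description")

    // Default behaviour: long cells are truncated at 20 characters.
    df.show()

    // Proposed behaviour: truncate cells to a caller-chosen width instead,
    // e.g. 40 characters here. The signature used below is an assumption.
    df.show(10, 40)

    spark.stop()
  }
}
```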