Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/16138
I don't think this is necessarily my call, but you're effectively writing a
Hive UDF, not a Spark one that depends on this. Writing a Spark UDF would
allow this to work just fine.
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/17804#discussion_r113966339
--- Diff:
external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaSinkSuite.scala
---
@@ -108,6 +111,22 @@ class KafkaSinkSuite
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/17804#discussion_r113966362
--- Diff:
external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaSinkSuite.scala
---
@@ -108,6 +111,22 @@ class KafkaSinkSuite
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/17804#discussion_r113966304
--- Diff:
external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaSinkSuite.scala
---
@@ -108,6 +111,22 @@ class KafkaSinkSuite
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/17804#discussion_r113966089
--- Diff:
external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaSinkSuite.scala
---
@@ -26,13 +26,16 @@ import
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/17804#discussion_r113966069
--- Diff:
external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaSinkSuite.scala
---
@@ -26,13 +26,16 @@ import
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/17804
Yeah, it's broken in both but only visible in one. Not sure if that needs to
be in two PRs.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/17804
cc @brkyvz this should be good to go.
GitHub user anabranch opened a pull request:
https://github.com/apache/spark/pull/17804
[SPARK-20496][SS] Bug in KafkaWriter Looks at Unanalyzed Plans
## What changes were proposed in this pull request?
We didn't enforce analyzed plans in Spark 2.1 when writing out to
Github user anabranch closed the pull request at:
https://github.com/apache/spark/pull/17792
GitHub user anabranch opened a pull request:
https://github.com/apache/spark/pull/17792
[SPARK-20496][SS] Bug in KafkaWriter Looks at Unanalyzed Plans
## What changes were proposed in this pull request?
We didn't enforce analyzed plans in Spark 2.1 when writing out to
Github user anabranch closed the pull request at:
https://github.com/apache/spark/pull/17787
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/17787#discussion_r113792381
--- Diff:
external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaSinkSuite.scala
---
@@ -108,6 +108,31 @@ class KafkaSinkSuite
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/17787#discussion_r113792301
--- Diff:
external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaSinkSuite.scala
---
@@ -108,6 +108,31 @@ class KafkaSinkSuite
GitHub user anabranch opened a pull request:
https://github.com/apache/spark/pull/17787
[SS] Bug in KafkaWriter Looks at Unanalyzed Plans
## What changes were proposed in this pull request?
This now asserts that a plan has been analyzed before reading the schema
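The assertion described above can be sketched as a toy model (illustrative names only, not Spark's actual `KafkaWriter` or `LogicalPlan` classes): the writer refuses to inspect a schema until the plan is marked analyzed, so validation never runs against an unresolved plan.

```python
from dataclasses import dataclass

# Toy model of the fix (hypothetical names, not Spark's real classes):
# schema validation must only ever see an analyzed plan, so the writer
# asserts that before reading any attributes.
@dataclass
class Plan:
    schema: list
    analyzed: bool

def validate_query(plan: Plan) -> None:
    # Fail fast if handed an unanalyzed plan.
    if not plan.analyzed:
        raise AssertionError("KafkaWriter requires an analyzed plan")
    if "value" not in plan.schema:
        raise ValueError("required attribute 'value' not found")

validate_query(Plan(schema=["key", "value"], analyzed=True))  # passes
```

This is only a sketch of the control flow; the real patch works on Spark's `QueryExecution`, not a dataclass.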
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/17695
Thanks for the info @srowen - this should be better now.
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/17695
This should be on hold until a JIRA resolution; I'd like to hear what
others say.
GitHub user anabranch opened a pull request:
https://github.com/apache/spark/pull/17695
[SPARK-20400][DOCS] Remove References to 3rd Party Vendor Tools
## What changes were proposed in this pull request?
Simple documentation change to remove explicit vendor references
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/16138
Manual correctness tests:
Python
```
>>> from pyspark.sql.functions import to_date, to_timestamp, lit
>>> spark.range(1).select(to_date(lit('2016-01-0
```
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/16138
@cloud-fan great questions.
I thought that was strange too. However, this is the **current** behavior, as
well as Java `SimpleDateFormat`'s behavior. I did not implement that
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/16138
@felixcheung thanks, made those changes :). Hopefully this will start
passing sometime :P
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r99371741
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala
---
@@ -1047,6 +1048,69 @@ case class ToDate
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r99283708
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala
---
@@ -1047,6 +1048,64 @@ case class ToDate
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r99278789
--- Diff: R/pkg/inst/tests/testthat/test_sparkSQL.R ---
@@ -1177,6 +1177,9 @@ test_that("column functions", {
c17 <- cov(c, c1) +
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r99278746
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala
---
@@ -1047,6 +1048,64 @@ case class ToDate
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r99278738
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala
---
@@ -1047,6 +1048,64 @@ case class ToDate
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r99278624
--- Diff: R/pkg/R/functions.R ---
@@ -1746,7 +1750,7 @@ setMethod("toRadians",
#' to_date(df$c)
#' to_da
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/16138
@cloud-fan The error I see is the one in this [test
case](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71913/testReport/org.apache.spark.sql.catalyst/ExpressionToSQLSuite
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/16138
@cloud-fan - Reynold referred me to you for this test failure.
My two tests are failing because Hive tests *allegedly* cover something
like this.
```
SELECT to_date('
```
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/16138
@felixcheung Thank you for your feedback! Small request: can you tell me if
my R test case is sufficient for this? It doesn't seem like there is extensive
R testing right now for virtually
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r97470798
--- Diff: R/pkg/R/functions.R ---
@@ -1730,24 +1730,82 @@ setMethod("toRadians",
#' to_date
#'
-#' Converts
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r97470763
--- Diff: R/pkg/R/functions.R ---
@@ -1730,24 +1730,82 @@ setMethod("toRadians",
#' to_date
#'
-#' Converts
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r97470784
--- Diff: R/pkg/R/functions.R ---
@@ -1730,24 +1730,82 @@ setMethod("toRadians",
#' to_date
#'
-#' Converts
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r97470589
--- Diff: R/pkg/R/functions.R ---
@@ -1730,24 +1730,82 @@ setMethod("toRadians",
#' to_date
#'
-#' Converts
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r97470513
--- Diff: R/pkg/R/functions.R ---
@@ -1730,24 +1730,82 @@ setMethod("toRadians",
#' to_date
#'
-#' Converts
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r97469354
--- Diff: R/pkg/R/generics.R ---
@@ -1274,6 +1270,14 @@ setGeneric("unbase64", function(x) {
standardGeneric("unbase64"
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r97469259
--- Diff: R/pkg/R/functions.R ---
@@ -1730,24 +1730,82 @@ setMethod("toRadians",
#' to_date
#'
-#' Converts
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r97469263
--- Diff: R/pkg/R/functions.R ---
@@ -1730,24 +1730,82 @@ setMethod("toRadians",
#' to_date
#'
-#' Converts
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/16138
@felixcheung yup! Thanks!
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/16138
@felixcheung Just tried that; it doesn't seem to work.
Here's the strange thing: it should follow the *exact* same structure as
[unix_timestamp](https://github.com/apache/spark/b
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/16138
@felixcheung could you give me some pointers on these R functions? I don't
quite know if I am registering them correctly and they're failing my builds.
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r95214266
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala
---
@@ -1047,6 +1048,60 @@ case class ToDate
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/16504
jenkins test
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16504#discussion_r95092780
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -730,8 +730,9 @@ def join(self, other, on=None, how=None):
a join expression (Column), or
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16504#discussion_r95092726
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -730,8 +730,9 @@ def join(self, other, on=None, how=None):
a join expression (Column), or
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16505#discussion_r95092215
--- Diff: R/pkg/R/functions.R ---
@@ -3324,7 +3325,8 @@ setMethod("percent_rank",
#' The difference between rank and denseRank is
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16504#discussion_r95092148
--- Diff: R/pkg/R/DataFrame.R ---
@@ -2313,9 +2313,9 @@ setMethod("dropDuplicates",
#' @param joinExpr (Optional) The expression used
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16504#discussion_r95092135
--- Diff: R/pkg/R/DataFrame.R ---
@@ -2313,9 +2313,9 @@ setMethod("dropDuplicates",
#' @param joinExpr (Optional) The expression used
GitHub user anabranch opened a pull request:
https://github.com/apache/spark/pull/16505
[SPARK-19127][DOCS] Update Rank Function Documentation
## What changes were proposed in this pull request?
- [X] Fix inconsistencies in function reference for dense rank and dense
GitHub user anabranch opened a pull request:
https://github.com/apache/spark/pull/16504
[SPARK-19126][Docs] Update Join Documentation Across Languages
## What changes were proposed in this pull request?
- [X] Make sure all join types are clearly mentioned
- [X] Make
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/16138
I believe I now understand why my previous implementation did not work.
My implementation originally looked like this:
```scala
case class ParseToTimestamp(left: Expression, format
```
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r95071556
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala
---
@@ -1047,6 +1048,60 @@ case class ToDate
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r95071450
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/DateExpressionsSuite.scala
---
@@ -389,6 +389,20 @@ class
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r95071452
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/FunctionRegistry.scala
---
@@ -342,7 +342,8 @@ object FunctionRegistry
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r95071447
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/DateExpressionsSuite.scala
---
@@ -389,6 +389,20 @@ class
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r95071721
--- Diff: python/pyspark/sql/functions.py ---
@@ -143,6 +143,12 @@ def _():
'measured in radians.',
}
+_fun
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/16180
@srowen completed.
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16180#discussion_r91865764
--- Diff: docs/programming-guide.md ---
@@ -1345,14 +1345,17 @@ therefore be efficiently supported in parallel.
They can be used to implement co
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r91403382
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala
---
@@ -1047,6 +1047,57 @@ case class ToDate
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/16138
More details are here:
https://gist.github.com/anabranch/7a42292593976878eb14e2d86a9966d4
This is completely perplexing to me.
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/16180
@srowen that should be a bit better but please let me know if it's still
unclear.
GitHub user anabranch opened a pull request:
https://github.com/apache/spark/pull/16180
[DOCS][MINOR] Clarify Where AccumulatorV2s are Displayed
## What changes were proposed in this pull request?
This PR clarifies where accumulators will be displayed.
## How was
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/16138
Now that my outputs are correct (in format), there's a new problem. The
types are *still* wrong.
```
scala> /// DETAILS
scala> // Schema
scala>
```
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/16138
And I found the error: I shouldn't be overriding the `DataType`.
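The lesson in that comment can be illustrated with a toy expression tree (hypothetical names, not Spark's `Expression` API): a wrapper that rewrites itself into another expression should delegate its result type to that replacement rather than hard-code an override.

```python
# Toy expression tree (hypothetical names, not Spark's Expression API).
class Literal:
    def __init__(self, value, data_type):
        self.value = value
        self.data_type = data_type

class ParseToDate:
    """Wrapper that rewrites a string literal into a date-typed replacement."""
    def __init__(self, child):
        self.child = child
        self.replacement = Literal(child.value, "date")

    @property
    def data_type(self):
        # Delegate: the wrapper's type is whatever its replacement
        # evaluates to, rather than a hard-coded override.
        return self.replacement.data_type

expr = ParseToDate(Literal("2016-01-01", "string"))
print(expr.data_type)  # date
```

Delegating keeps the declared type and the evaluated type in agreement by construction, which is the failure mode the comment describes.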
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r91173386
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala
---
@@ -1047,6 +1047,53 @@ case class ToDate
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r91172169
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala
---
@@ -1047,6 +1047,53 @@ case class ToDate
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r91156380
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala
---
@@ -1047,6 +1047,53 @@ case class ToDate
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r91134199
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala
---
@@ -1047,6 +1047,53 @@ case class ToDate
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r91134077
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala
---
@@ -1047,6 +1047,53 @@ case class ToDate
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r90899108
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/DateFunctionsSuite.scala ---
@@ -351,34 +351,81 @@ class DateFunctionsSuite extends QueryTest with
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r90788929
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/DateFunctionsSuite.scala ---
@@ -351,34 +351,81 @@ class DateFunctionsSuite extends QueryTest with
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r90785863
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/functions.scala ---
@@ -2661,12 +2661,30 @@ object functions {
def unix_timestamp(s: Column
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r90785148
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/functions.scala ---
@@ -2661,12 +2661,30 @@ object functions {
def unix_timestamp(s: Column
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r90785129
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/functions.scala ---
@@ -2661,12 +2661,31 @@ object functions {
def unix_timestamp(s: Column
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r90785106
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/functions.scala ---
@@ -2666,7 +2666,18 @@ object functions {
* @group datetime_funcs
GitHub user anabranch opened a pull request:
https://github.com/apache/spark/pull/16138
[WIP][SPARK-16609] Add to_date with format function.
## What changes were proposed in this pull request?
This pull request adds a user-facing `to_date` function that allows for a
format
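The per-row behavior the new function needs can be sketched with the Python standard library. This is only an illustration: the function name is hypothetical, and Python's `strptime` codes such as `%Y/%m/%d` stand in for the Java-style `yyyy/MM/dd` patterns Spark actually accepts.

```python
from datetime import date, datetime

# Illustrative sketch of what a format-aware to_date does per row:
# parse the string with the user-supplied pattern, then drop the
# time-of-day component so only a date remains.
def to_date_with_format(s: str, fmt: str) -> date:
    return datetime.strptime(s, fmt).date()

print(to_date_with_format("2016/01/05", "%Y/%m/%d"))  # 2016-01-05
```

The real expression runs inside Catalyst, but the parse-then-truncate shape is the same.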
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/15815
failures also seem unrelated.
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/15815#discussion_r88382198
--- Diff: core/src/main/scala/org/apache/spark/api/java/JavaRDD.scala ---
@@ -99,6 +99,8 @@ class JavaRDD[T](val rdd: RDD[T])(implicit val classTag
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/15815#discussion_r88273798
--- Diff: core/src/main/scala/org/apache/spark/api/java/JavaRDD.scala ---
@@ -99,6 +99,8 @@ class JavaRDD[T](val rdd: RDD[T])(implicit val classTag
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/15815
The test failure seems quite unrelated but we'll see if it happens again.
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/15815#discussion_r87699964
--- Diff: R/pkg/R/DataFrame.R ---
@@ -936,7 +936,9 @@ setMethod("unique",
#' Sample
#'
-#' Return
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/15815
@srowen I think this is probably ready.
- [ ] Updated All Languages
- [ ] Updated Ticket Description
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/15815#discussion_r87696862
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -1612,7 +1612,9 @@ class Dataset[T] private[sql
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/15815#discussion_r87696853
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -1612,7 +1612,9 @@ class Dataset[T] private[sql
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/15815
Sounds good to me. I will update it shortly.
GitHub user anabranch opened a pull request:
https://github.com/apache/spark/pull/15815
[DOCS][SPARK-18365] Documentation is Switched on Sample Methods
## What changes were proposed in this pull request?
The documentation for sample was switched for the two methods that take
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/13945#discussion_r68792373
--- Diff: docs/structured-streaming-programming-guide.md ---
@@ -0,0 +1,888 @@
+---
+layout: global
+displayTitle: Structured Streaming
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/13945#discussion_r68792300
--- Diff: docs/structured-streaming-programming-guide.md ---
@@ -0,0 +1,888 @@
+---
+layout: global
+displayTitle: Structured Streaming
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/13945#discussion_r68792137
--- Diff: docs/structured-streaming-programming-guide.md ---
@@ -0,0 +1,888 @@
+---
+layout: global
+displayTitle: Structured Streaming
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/13945#discussion_r68789246
--- Diff: docs/structured-streaming-programming-guide.md ---
@@ -0,0 +1,888 @@
+---
+layout: global
+displayTitle: Structured Streaming
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/13945#discussion_r68791843
--- Diff: docs/structured-streaming-programming-guide.md ---
@@ -0,0 +1,888 @@
+---
+layout: global
+displayTitle: Structured Streaming
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/13945#discussion_r68790342
--- Diff: docs/structured-streaming-programming-guide.md ---
@@ -0,0 +1,888 @@
+---
+layout: global
+displayTitle: Structured Streaming
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/13945#discussion_r68790047
--- Diff: docs/structured-streaming-programming-guide.md ---
@@ -0,0 +1,888 @@
+---
+layout: global
+displayTitle: Structured Streaming
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/13945#discussion_r68790860
--- Diff: docs/structured-streaming-programming-guide.md ---
@@ -0,0 +1,888 @@
+---
+layout: global
+displayTitle: Structured Streaming
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/13945#discussion_r68790649
--- Diff: docs/structured-streaming-programming-guide.md ---
@@ -0,0 +1,888 @@
+---
+layout: global
+displayTitle: Structured Streaming
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/13945#discussion_r68790090
--- Diff: docs/structured-streaming-programming-guide.md ---
@@ -0,0 +1,888 @@
+---
+layout: global
+displayTitle: Structured Streaming
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/13916
jenkins test this please
GitHub user anabranch opened a pull request:
https://github.com/apache/spark/pull/13916
[SPARK-16220] Revert Change to Bring Back SHOW FUNCTIONS Functionality
## What changes were proposed in this pull request?
- Fix tests regarding show functions functionality
- Revert
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/13041#discussion_r62793571
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/DefaultSource.scala
---
@@ -61,7 +61,9 @@ class DefaultSource extends
Github user anabranch commented on the pull request:
https://github.com/apache/spark/pull/13041#issuecomment-218341617
cc: @andrewor14