http://git-wip-us.apache.org/repos/asf/spark-website/blob/d4f0c34a/site/docs/2.1.1/api/java/org/apache/spark/JobExecutionStatus.html
--
diff --git a/site/docs/2.1.1/api/java/org/apache/spark/JobExecutionStatus.html
http://git-wip-us.apache.org/repos/asf/spark-website/blob/d4f0c34a/site/docs/2.1.1/api/java/org/apache/spark/ComplexFutureAction.html
--
diff --git a/site/docs/2.1.1/api/java/org/apache/spark/ComplexFutureAction.html
http://git-wip-us.apache.org/repos/asf/spark-website/blob/d4f0c34a/site/docs/2.1.1/api/R/randomSplit.html
--
diff --git a/site/docs/2.1.1/api/R/randomSplit.html
b/site/docs/2.1.1/api/R/randomSplit.html
new file mode 100644
index
http://git-wip-us.apache.org/repos/asf/spark-website/blob/e4019e64/site/news/spark-1-4-1-released.html
--
diff --git a/site/news/spark-1-4-1-released.html
b/site/news/spark-1-4-1-released.html
index d4327a4..faf7639 100644
---
http://git-wip-us.apache.org/repos/asf/spark-website/blob/e4019e64/site/release-process.html
--
diff --git a/site/release-process.html b/site/release-process.html
index 4dded93..7782ab0 100644
--- a/site/release-process.html
+++
Add Spark 2.1.1 release.
Project: http://git-wip-us.apache.org/repos/asf/spark-website/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark-website/commit/e4019e64
Tree: http://git-wip-us.apache.org/repos/asf/spark-website/tree/e4019e64
Diff:
Repository: spark-website
Updated Branches:
refs/heads/asf-site 09046892b -> e4019e64c
http://git-wip-us.apache.org/repos/asf/spark-website/blob/e4019e64/site/sitemap.xml
--
diff --git a/site/sitemap.xml b/site/sitemap.xml
Repository: spark
Updated Tags: refs/tags/v2.1.1-rc3 [deleted] 2ed19cff2
-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
Repository: spark
Updated Tags: refs/tags/v2.1.1-rc2 [deleted] 02b165dcc
Repository: spark
Updated Tags: refs/tags/v2.1.1-rc4 [deleted] 267aca5bd
Repository: spark
Updated Tags: refs/tags/v2.1.1-rc1 [deleted] 30abb95c9
Repository: spark
Updated Tags: refs/tags/v2.1.1 [created] 267aca5bd
Author: marmbrus
Date: Tue May 2 01:05:29 2017
New Revision: 19436
Log:
Add spark-2.1.1-rc4
Added:
dev/spark/spark-2.1.1-rc4/
dev/spark/spark-2.1.1-rc4/SparkR_2.1.1.tar.gz (with props)
dev/spark/spark-2.1.1-rc4/SparkR_2.1.1.tar.gz.asc
dev/spark/spark-2.1.1-rc4/SparkR_2.1.1
Author: marmbrus
Date: Tue May 2 01:06:55 2017
New Revision: 19437
Log:
Release Spark 2.1.1
Added:
release/spark/spark-2.1.1/
- copied from r19436, dev/spark/spark-2.1.1-rc4/
Removed:
dev/spark/spark-2.1.1-rc4
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/17765#discussion_r113827924
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamExecution.scala
---
@@ -825,6 +832,11 @@ class StreamExecution
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/17765#discussion_r113593037
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamExecution.scala
---
@@ -252,6 +252,7 @@ class StreamExecution
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/17765#discussion_r113560170
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamExecution.scala
---
@@ -825,6 +833,11 @@ class StreamExecution
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/17765#discussion_r113560096
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamExecution.scala
---
@@ -252,6 +252,7 @@ class StreamExecution
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/17594
LGTM for fixing the issue with the test. We should separately decide whether
this is really the behavior we want for the commit log.
---
If your project is set up for it, you can reply to this email
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/17594#discussion_r110735241
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamExecution.scala
---
@@ -304,8 +304,8 @@ class StreamExecution
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/17488#discussion_r109070462
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamTest.scala ---
@@ -490,6 +490,18 @@ trait StreamTest extends QueryTest
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/17488#discussion_r109069839
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/FlatMapGroupsWithStateExec.scala
---
@@ -68,6 +68,17 @@ case class
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/17398#discussion_r107985278
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/encoders/EncoderResolutionSuite.scala
---
@@ -62,6 +66,54 @@ class
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/17252
Thanks for working on this, but I think this is inconsistent with other
APIs in Spark. Also for things like the foreach sink, you might actually be
expecting the option to affect the partitioning
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/17361
LGTM
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/17361#discussion_r107307806
--- Diff:
sql/catalyst/src/main/java/org/apache/spark/sql/streaming/KeyedStateTimeout.java
---
@@ -34,9 +32,20 @@
@InterfaceStability.Evolving
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/17361#discussion_r107307722
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/UnsupportedOperationChecker.scala
---
@@ -147,49 +147,68 @@ object
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/17361#discussion_r107307617
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/streaming/FlatMapGroupsWithStateSuite.scala
---
@@ -519,6 +588,52 @@ class
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/17361#discussion_r107307367
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/KeyedStateImpl.scala
---
@@ -17,37 +17,45 @@
package
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/17361#discussion_r107304893
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/streaming/FlatMapGroupsWithStateSuite.scala
---
@@ -519,6 +588,52 @@ class
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/17361#discussion_r107304618
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/UnsupportedOperationChecker.scala
---
@@ -147,49 +147,68 @@ object
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/17361#discussion_r107304531
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/UnsupportedOperationChecker.scala
---
@@ -147,49 +147,68 @@ object
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/17361#discussion_r107304196
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/UnsupportedOperationChecker.scala
---
@@ -147,49 +147,68 @@ object
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/17361#discussion_r107304133
--- Diff:
sql/catalyst/src/main/java/org/apache/spark/sql/streaming/KeyedStateTimeout.java
---
@@ -34,9 +32,20 @@
@InterfaceStability.Evolving
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/17371
I don't think that will solve the problem though. You will just get a
different error message.
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/17371
I really think the core problem here is that we allow you to use resolved
attributes at all in the user API. Unfortunately we are somewhat stuck with
that bad decision. Personally, I never use
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/17268
Sorry I'm still not sure if this is a good idea.
Why disallow the following,
```scala
spark
  .readStream
  .withWatermark("eventTime", …
```
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/17268
Sorry, I wasn't suggesting we mandate this. There may be use cases where
users are okay deduping a short-lived stream w/o a watermark. I'm only saying
the timestamp is mandatory
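The pattern under discussion can be sketched as follows. This is a hedged illustration, not code from the PR: it assumes a `SparkSession` named `spark` and a Kafka source with an `eventTime` timestamp column (the source, topic, and column names are all hypothetical). Including the event-time column in `dropDuplicates` and bounding it with `withWatermark` lets Spark evict old deduplication state instead of keeping it forever.

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical session and source; any streaming source with a timestamp works.
val spark = SparkSession.builder().appName("dedup-sketch").getOrCreate()

val deduped = spark
  .readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092") // illustrative
  .option("subscribe", "events")                       // illustrative
  .load()
  .selectExpr("CAST(value AS STRING) AS id", "timestamp AS eventTime")
  .withWatermark("eventTime", "10 minutes") // bounds the dedup state
  .dropDuplicates("id", "eventTime")        // event-time column included
```

Without the watermark, the same query would have to remember every `id` ever seen, which is why the timestamp bound matters for long-running streams.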
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/17179#discussion_r105991219
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/streaming/KeyedState.scala ---
@@ -61,25 +65,50 @@ import
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/17179#discussion_r105990971
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/streaming/KeyedState.scala ---
@@ -61,25 +65,50 @@ import
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/17179#discussion_r105990594
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/KeyValueGroupedDataset.scala ---
@@ -249,6 +250,43 @@ class KeyValueGroupedDataset[K, V] private
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/17179#discussion_r105822080
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/streaming/KeyedState.scala ---
@@ -61,25 +65,50 @@ import
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/17179#discussion_r105821698
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/KeyValueGroupedDataset.scala ---
@@ -298,12 +368,14 @@ class KeyValueGroupedDataset[K, V] private
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/17179#discussion_r105823059
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/FlatMapGroupsWithStateExec.scala
---
@@ -0,0 +1,270
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/17179#discussion_r105822317
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/streaming/KeyedState.scala ---
@@ -61,25 +65,50 @@ import
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/17179#discussion_r105822109
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/streaming/KeyedState.scala ---
@@ -61,25 +65,50 @@ import
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/17179#discussion_r105821496
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/KeyValueGroupedDataset.scala ---
@@ -249,6 +250,43 @@ class KeyValueGroupedDataset[K, V] private
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/17268
Say the event-time column chosen is the time of delivery into something like
Kafka. Due to retries we end up with two events with different timestamps.
Consider the following stream
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/17268
I'm mixed if we want this to happen implicitly. Here's how I think about
the tradeoffs for this change: On the pro side, with this change we avoid the
case where the user forgets to include
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/17228
LGTM too
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/17087
There appears to have been some code drift (as `GeneratePredicate` and
`InterpretedPredicate` both used to return a class that inherited from a common
interface), but I don't think it's hard
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/17219#discussion_r105061613
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/OffsetCommitLog.scala
---
@@ -0,0 +1,61 @@
+/*
+ * Licensed
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/17219#discussion_r105062302
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/streaming/Trigger.scala ---
@@ -38,6 +38,26 @@ sealed trait Trigger
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/17219#discussion_r105062818
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamExecution.scala
---
@@ -377,17 +385,25 @@ class StreamExecution
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/17219#discussion_r105062498
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamExecution.scala
---
@@ -284,6 +291,7 @@ class StreamExecution
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/17219#discussion_r105061689
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/OffsetCommitLog.scala
---
@@ -0,0 +1,61 @@
+/*
+ * Licensed
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/17219#discussion_r105062343
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/OffsetCommitLog.scala
---
@@ -0,0 +1,61 @@
+/*
+ * Licensed
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/17153
LGTM
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/17087
I don't think we need a complex refactoring. Why can't `newPredicate`
catch the exception, log a warning and return an interpreted `Predicate`?
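The fallback being suggested can be sketched generically. This is an illustrative stand-in, not Spark's actual internals: `Predicate`, `generatePredicate`, and `interpretPredicate` are hypothetical names, and the "codegen" path here simply throws to simulate a Janino compilation failure.

```scala
import scala.util.control.NonFatal

// Single-abstract-method trait so lambdas convert to it (Scala 2.12+).
trait Predicate { def eval(row: Map[String, Any]): Boolean }

object PredicateFactory {
  // Stand-in for code generation, which can fail at compile time.
  private def generatePredicate(expr: String): Predicate =
    throw new UnsupportedOperationException(s"codegen failed for: $expr")

  // Toy interpreted fallback: only understands "<column> == <literal>".
  private def interpretPredicate(expr: String): Predicate = {
    val Array(col, value) = expr.split("==").map(_.trim)
    (row: Map[String, Any]) => row.get(col).exists(_.toString == value)
  }

  // The suggested shape: try codegen, catch, warn, fall back to interpretation.
  def newPredicate(expr: String): Predicate =
    try generatePredicate(expr)
    catch {
      case NonFatal(e) =>
        Console.err.println(
          s"Codegen failed (${e.getMessage}); falling back to interpreted mode")
        interpretPredicate(expr)
    }
}
```

The appeal of this shape is that callers of `newPredicate` never see the failure; they just get a slower `Predicate` plus a logged warning.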
---
If your project is set up for it, you can
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/17201
/cc @cloud-fan
GitHub user marmbrus opened a pull request:
https://github.com/apache/spark/pull/17201
[SPARK-18055][SQL] Use correct mirror in ExpressionEncoder
Previously, we were using the mirror of the passed-in `TypeTag` when reflecting
to build an encoder. This fails when the outer class
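The mirror issue the PR title refers to can be illustrated with plain Scala reflection. This sketch is not the `ExpressionEncoder` code itself; it just shows why the `TypeTag`'s own mirror is the right one to reflect with: that mirror is tied to the classloader that actually loaded `T`, which may differ from any fixed default (e.g. in a REPL).

```scala
import scala.reflect.runtime.universe._

// Resolve the runtime Class for T using the mirror carried by its TypeTag,
// rather than a mirror built from some unrelated classloader.
def classFor[T](implicit tag: TypeTag[T]): Class[_] = {
  val mirror = tag.mirror // the mirror that loaded T
  mirror.runtimeClass(tag.tpe.typeSymbol.asClass)
}
```

For ordinary classes both mirrors agree, but for classes defined in a REPL or loaded by a custom classloader, only the tag's mirror can resolve them.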
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/17183
LGTM
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/17199
LGTM
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/17183#discussion_r104806617
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/statefulOperators.scala
---
@@ -361,7 +361,7 @@ case class
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/17087#discussion_r104790500
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/basicPhysicalOperators.scala
---
@@ -213,12 +217,30 @@ case class FilterExec(condition
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/17087
I agree with the general approach of having a fallback from code generation
to interpreted evaluation, but I also agree that this feels too narrowly
targeted. In particular, why do this in one
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/16981
yeah, LGTM
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/17044
LGTM
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/17044#discussion_r104258607
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamExecution.scala
---
@@ -709,12 +717,13 @@ class StreamExecution
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/16929#discussion_r104253528
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/jsonExpressions.scala
---
@@ -480,23 +480,45 @@ case class JsonTuple
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/16929#discussion_r104253484
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/jsonExpressions.scala
---
@@ -480,23 +480,45 @@ case class JsonTuple
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/17120
Note streams can be very long running, so this isn't about some short
window. It could even be that I'm moving to a different bucket (but don't want
to lose my exactly-once guarantees of a very
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/17120
The use case here is when you have truly unique filenames (i.e., they
contain a GUID). This is actually pretty common in my experience. We
definitely shouldn't turn this on by default
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/17070
/cc @zsxwing
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/16929#discussion_r103337028
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/functions.scala ---
@@ -2969,11 +2969,27 @@ object functions
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/16929#discussion_r103302035
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/jsonExpressions.scala
---
@@ -480,36 +480,79 @@ case class JsonTuple
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/16929#discussion_r103300622
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/functions.scala ---
@@ -2969,11 +2969,27 @@ object functions
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/16929
Hmm, I'm not sure we want to change this to a generator. I think that has
performance consequences as well as possibly being surprising. I would
probably make it possible to handle arrays (when
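The alternative being suggested, keeping the existing function but letting it handle arrays, can be sketched as below. This is a hedged illustration, assuming a Spark build where `from_json` accepts an `ArrayType` schema (support for this landed later than the version under review); the session setup and column names are illustrative.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.from_json
import org.apache.spark.sql.types._

val spark = SparkSession.builder().master("local[1]").appName("sketch").getOrCreate()
import spark.implicits._

// A column holding a JSON *array* of objects, not a single object.
val df = Seq("""[{"a": 1}, {"a": 2}]""").toDF("json")

// Passing an ArrayType schema parses the whole array instead of truncating
// it to the first element or rejecting it.
val schema = ArrayType(new StructType().add("a", IntegerType))
df.select(from_json($"json", schema).as("parsed")).show(false)
```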
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/16929
/cc @brkyvz
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/16987
I spoke too soon, sorry! Thinking about it more the deterministic filename
solution is not great as the number of partitions could change for several
reasons.
Given that would you mind
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/16987
Thanks for working on this, however I'm not sure if we want to go with this
approach. In Spark 2.2, I think we should consider deprecating the manifest
files and instead use deterministic file
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/16970#discussion_r101862301
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -2006,15 +2006,19 @@ class Dataset[T] private[sql
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/16970#discussion_r101834289
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/UnsupportedOperationChecker.scala
---
@@ -35,6 +35,9 @@ object
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/16929
I agree that it's wrong to truncate, but why not just fix handling of arrays
rather than disallow it?
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/15918
@windpiger, were you still working on this? I think it would be a useful
feature if we can get the tests to pass.
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/16758#discussion_r99255963
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/KeyedState.scala ---
@@ -0,0 +1,134 @@
+/*
+ * Licensed to the Apache Software Foundation
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/16664
I think @sameeragarwal plans to review. I glanced and it looks fine.
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/16758#discussion_r98802935
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/StateImpl.scala ---
@@ -0,0 +1,70 @@
+/*
+ * Licensed to the Apache Software
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/16758#discussion_r98802826
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/StateImpl.scala ---
@@ -0,0 +1,70 @@
+/*
+ * Licensed to the Apache Software
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/16758#discussion_r98790560
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/StateImpl.scala ---
@@ -0,0 +1,70 @@
+/*
+ * Licensed to the Apache Software
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/16758#discussion_r98787267
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/StateImpl.scala ---
@@ -0,0 +1,70 @@
+/*
+ * Licensed to the Apache Software
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/16758#discussion_r98778359
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkStrategies.scala ---
@@ -313,6 +313,25 @@ abstract class SparkStrategies extends
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/16758#discussion_r98776316
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/KeyValueGroupedDataset.scala ---
@@ -219,6 +219,160 @@ class KeyValueGroupedDataset[K, V] private
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/16758#discussion_r98774221
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/KeyValueGroupedDataset.scala ---
@@ -219,6 +219,160 @@ class KeyValueGroupedDataset[K, V] private
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/16758#discussion_r98779817
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StatefulAggregate.scala
---
@@ -235,3 +240,86 @@ case class StateStoreSaveExec
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/16758#discussion_r98778114
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/State.scala ---
@@ -0,0 +1,101 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/16758#discussion_r98779663
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StatefulAggregate.scala
---
@@ -235,3 +240,86 @@ case class StateStoreSaveExec
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/16758#discussion_r98775275
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/KeyValueGroupedDataset.scala ---
@@ -219,6 +219,160 @@ class KeyValueGroupedDataset[K, V] private
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/16758#discussion_r98778548
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/StateImpl.scala ---
@@ -0,0 +1,70 @@
+/*
+ * Licensed to the Apache Software
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/16758#discussion_r98779439
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StatefulAggregate.scala
---
@@ -235,3 +240,86 @@ case class StateStoreSaveExec
201 - 300 of 8258 matches