Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/14553#discussion_r79492259
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamExecution.scala
---
@@ -72,13 +74,17 @@ class StreamExecution
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/14803#discussion_r79488089
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -197,10 +197,13 @@ case class DataSource
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/14803#discussion_r79488934
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/streaming/FileStreamSourceSuite.scala
---
@@ -608,6 +608,34 @@ class FileStreamSourceSuite extends
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/15102#discussion_r79088749
--- Diff:
external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaSource.scala
---
@@ -0,0 +1,446 @@
+/*
+ * Licensed
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/15102#discussion_r79089396
--- Diff:
external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaSource.scala
---
@@ -0,0 +1,446 @@
+/*
+ * Licensed
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/15102#discussion_r79088295
--- Diff:
external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaSource.scala
---
@@ -0,0 +1,446 @@
+/*
+ * Licensed
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/15102#discussion_r79088253
--- Diff:
external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/CachedKafkaConsumer.scala
---
@@ -0,0 +1,186 @@
+/*
+ * Licensed
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/15102#discussion_r79089110
--- Diff:
external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaSource.scala
---
@@ -0,0 +1,446 @@
+/*
+ * Licensed
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/15102#discussion_r79089641
--- Diff:
external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaSource.scala
---
@@ -0,0 +1,446 @@
+/*
+ * Licensed
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/15102#discussion_r79089541
--- Diff:
external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaSource.scala
---
@@ -0,0 +1,446 @@
+/*
+ * Licensed
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/15102#discussion_r79088325
--- Diff:
external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaSource.scala
---
@@ -0,0 +1,446 @@
+/*
+ * Licensed
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/15102#discussion_r79088914
--- Diff:
external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaSource.scala
---
@@ -0,0 +1,446 @@
+/*
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/15102
> This already does depend on most of the existing Kafka DStream
implementation
I pushed for this code to be copied rather than refactored because I think
this is the right direction.
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/15054#discussion_r79056087
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/CheckAnalysis.scala
---
@@ -360,6 +360,7 @@ trait CheckAnalysis extends
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/15023
Thanks for understanding! I do hope you guys upgrade eventually, there's a
lot of good stuff and 2.0.1 should be out in the near future. Please do report
any issues you see :)
---
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/15023
Thanks for spending the time to backport this, but it does seem a little
risky to include changes to the configuration system in a maintenance release.
As such, I'd probably err on the side of caution.
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/14728#discussion_r75786696
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/FileStreamSource.scala
---
@@ -17,21 +17,18 @@
package
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/14124
@cloud-fan
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/14356
/cc @rxin
---
GitHub user marmbrus opened a pull request:
https://github.com/apache/spark/pull/14356
[SPARK-16724] Expose DefinedByConstructorParams
We don't generally make things in catalyst/execution private. Instead they
are just undocumented due to their lack of stability guarantees.
You
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/14252
LGTM
---
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/14087
/cc @tdas
---
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/14087#discussion_r71025786
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/streaming/FileStreamSourceSuite.scala
---
@@ -331,6 +331,24 @@ class FileStreamSourceSuite extends
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/14214
Thanks for working on this, but I'm tempted to close this as "won't fix".
It's likely we are going to have to rewrite the incremental planner completely
for 2.1.
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/14170
Oh, I see what you are saying, although I'm not sure I agree with the
conclusion. Given that tests can run in parallel, I don't think you actually
want to toggle back and forth between timezones.
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/14170
I think that's where we are today. All query tests use LA and the harness
configures that. The problem before this PR was that this one suite was
setting LA (due to its base class), and then UTC (due
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/14170
All the tests in SQL are written to assume `Los_Angeles`, so I think this
is actually desired. Otherwise people have to configure their machine
specially to run spark tests.
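The approach discussed above, pinning one default timezone in the shared test harness so individual suites never toggle it, can be sketched roughly as follows. This is a hypothetical illustration, not Spark's actual `QueryTest` code; the `FixedTimeZoneHarness` trait and `TimeZoneCheck` object are invented names.

```scala
import java.util.TimeZone

// Hypothetical sketch: a shared test-harness trait that pins the JVM's
// default timezone once, so suites running in parallel never race by
// setting and unsetting different zones.
trait FixedTimeZoneHarness {
  // Runs when the harness is initialized, before any test body.
  TimeZone.setDefault(TimeZone.getTimeZone("America/Los_Angeles"))
}

object TimeZoneCheck extends FixedTimeZoneHarness {
  def currentZoneId: String = TimeZone.getDefault.getID
}
```

Because `TimeZone.setDefault` is JVM-global, setting it in one place (the harness) and nowhere else is what avoids the LA-then-UTC conflict described in the comment.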
---
Repository: spark
Updated Branches:
refs/heads/branch-2.0 2e97f3a08 -> 7de183d97
[SPARK-16531][SQL][TEST] Remove timezone setting from
DataFrameTimeWindowingSuite
## What changes were proposed in this pull request?
It's unnecessary. `QueryTest` already sets it.
Author: Burak Yavuz
Repository: spark
Updated Branches:
refs/heads/master 01f09b161 -> 0744d84c9
[SPARK-16531][SQL][TEST] Remove timezone setting from
DataFrameTimeWindowingSuite
## What changes were proposed in this pull request?
It's unnecessary. `QueryTest` already sets it.
Author: Burak Yavuz
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/14170
Thanks, merging to master and 2.0
---
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/14170
We should put this in 2.0 for whoever merges.
---
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/14170
LGTM, can you make a JIRA? It's a little scary to change tests without one in
case there is flakiness.
---
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/14139
LGTM
---
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/13890#discussion_r70139445
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/ExistingRDD.scala ---
@@ -74,13 +74,71 @@ object RDDConversions
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/14094
LGTM
---
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/14094#discussion_r69986165
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/FileStreamSource.scala
---
@@ -45,6 +47,7 @@ class FileStreamSource
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/14094#discussion_r69985831
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/FileStreamSource.scala
---
@@ -26,6 +27,7 @@ import
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/14030#discussion_r69819131
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/ForeachSink.scala
---
@@ -30,7 +32,42 @@ import org.apache.spark.sql.{DataFrame
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/14030#discussion_r69819064
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -155,7 +155,7 @@ private[sql] object Dataset {
class Dataset[T] private[sql
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/13873
/cc @cloud-fan
---
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/13890
/cc @cloud-fan
---
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/14002
LGTM
---
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/14000
ok to test
---
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/13901
No tests?
---
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/13939
LGTM
---
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/13939#discussion_r68841421
--- Diff:
sql/hive/compatibility/src/test/scala/org/apache/spark/sql/hive/execution/HiveWindowFunctionQuerySuite.scala
---
@@ -569,6 +572,7 @@ class
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/13939#discussion_r68840940
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveSessionCatalog.scala ---
@@ -196,6 +185,10 @@ private[sql] class HiveSessionCatalog
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/13862
LGTM
---
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/13862#discussion_r68148358
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileScanRDD.scala
---
@@ -43,13 +48,16 @@ case class PartitionedFile
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/13718#discussion_r67740050
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamSuite.scala ---
@@ -211,6 +217,7 @@ class StreamSuite extends StreamTest
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/13718#discussion_r67736904
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamSuite.scala ---
@@ -211,6 +217,7 @@ class StreamSuite extends StreamTest
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/13718#discussion_r67734059
--- Diff: core/src/main/scala/org/apache/spark/util/ManualClock.scala ---
@@ -57,9 +59,19 @@ private[spark] class ManualClock(private var time: Long
Repository: spark
Updated Branches:
refs/heads/master 905f774b7 -> 5cfabec87
[SPARK-16050][TESTS] Remove the flaky test: ConsoleSinkSuite
## What changes were proposed in this pull request?
ConsoleSinkSuite just collects content from stdout and compares it with the
expected string.
Repository: spark
Updated Branches:
refs/heads/branch-2.0 0b0b5fe54 -> 363db9f8b
[SPARK-16050][TESTS] Remove the flaky test: ConsoleSinkSuite
## What changes were proposed in this pull request?
ConsoleSinkSuite just collects content from stdout and compares it with the
expected string.
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/13776
LGTM, merging to master and 2.0
---
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/13727#discussion_r67578285
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/test/DataFrameReaderWriterSuite.scala
---
@@ -228,4 +220,101 @@ class DataFrameReaderWriterSuite
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/13727
A few comments. Overall LGTM.
---
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/13727#discussion_r67575918
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/test/DataFrameReaderWriterSuite.scala
---
@@ -228,4 +220,101 @@ class DataFrameReaderWriterSuite
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/13727#discussion_r67575723
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala ---
@@ -276,7 +267,45 @@ class DataFrameReader private[sql](sparkSession
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/13727#discussion_r67575684
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala ---
@@ -276,7 +267,45 @@ class DataFrameReader private[sql](sparkSession
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/13740
There are examples in `quietly`.
---
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/13740
LGTM and we should merge before the RC.
How hard would it be to add a test? You could redirect stdout temporarily.
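Temporarily redirecting stdout for a test, as suggested here, can be done with Scala's `Console.withOut`. A minimal sketch (the `captureStdout` helper is a hypothetical name, not part of Spark's test utilities):

```scala
import java.io.ByteArrayOutputStream

// Sketch: capture whatever a block prints to stdout so a test can assert
// on it; Console.withOut restores the original stream when the block ends.
def captureStdout(block: => Unit): String = {
  val buffer = new ByteArrayOutputStream()
  Console.withOut(buffer) { block }
  buffer.toString("UTF-8")
}
```

Note that this only captures output routed through Scala's `Console` (e.g. `println`); code writing directly to `java.lang.System.out` would need `System.setOut` instead.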
---
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/13727#discussion_r67573034
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/test/DataFrameReaderWriterSuite.scala
---
@@ -228,4 +220,101 @@ class DataFrameReaderWriterSuite
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/13727#discussion_r67572740
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/test/DataFrameReaderWriterSuite.scala
---
@@ -228,4 +220,101 @@ class DataFrameReaderWriterSuite
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/13727#discussion_r67572585
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/test/DataFrameReaderWriterSuite.scala
---
@@ -228,4 +220,101 @@ class DataFrameReaderWriterSuite
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/13727#discussion_r67572462
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/test/DataFrameReaderWriterSuite.scala
---
@@ -228,4 +220,101 @@ class DataFrameReaderWriterSuite
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/13727#discussion_r67572121
--- Diff:
sql/core/src/test/java/test/org/apache/spark/sql/JavaDataFrameReaderWriterSuite.java
---
@@ -0,0 +1,158 @@
+/*
+* Licensed to the Apache
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/13727#discussion_r67572068
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala ---
@@ -368,6 +397,63 @@ class DataFrameReader private[sql](sparkSession
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/13718
LGTM
---
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/13718#discussion_r67435503
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -545,6 +545,13 @@ object SQLConf {
.booleanConf
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/13673
LGTM
---
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/13653#discussion_r67029686
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamWriter.scala
---
@@ -0,0 +1,401 @@
+/*
+ * Licensed to the Apache
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/13653
Overall looks pretty good. Feel free to merge after addressing comments /
passing tests to avoid more conflicts.
---
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/13653#discussion_r67023372
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamWriter.scala
---
@@ -0,0 +1,401 @@
+/*
+ * Licensed to the Apache
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/13653#discussion_r67023216
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamWriter.scala
---
@@ -0,0 +1,401 @@
+/*
+ * Licensed to the Apache
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/13653#discussion_r67023075
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamWriter.scala
---
@@ -0,0 +1,401 @@
+/*
+ * Licensed to the Apache
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/13653#discussion_r67022595
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamReader.scala
---
@@ -0,0 +1,288 @@
+/*
+ * Licensed to the Apache
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/13638
Hmmm, does not apply cleanly to 1.6. @ueshin if you have time it might be
nice to backport.
---
Repository: spark
Updated Branches:
refs/heads/branch-2.0 d5e60748b -> 83aa17d44
[SPARK-15915][SQL] Logical plans should use canonicalized plan when override
sameResult.
## What changes were proposed in this pull request?
`DataFrame` with plan overriding `sameResult` but not using
Repository: spark
Updated Branches:
refs/heads/master bc02d0112 -> c5b735581
[SPARK-15915][SQL] Logical plans should use canonicalized plan when override
sameResult.
## What changes were proposed in this pull request?
`DataFrame` with plan overriding `sameResult` but not using canonicalized
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/13638
Yeah, sounds reasonable. Merging to master, 2.0 and 1.6.
---
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/13638#discussion_r66884087
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/CacheManager.scala ---
@@ -155,8 +156,9 @@ private[sql] class CacheManager extends Logging
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/13638#discussion_r66882083
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/CacheManager.scala ---
@@ -155,8 +156,9 @@ private[sql] class CacheManager extends Logging
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/13638
Seems reasonable. Is this a regression from 1.6?
---
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/13638#discussion_r66876008
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/CacheManager.scala ---
@@ -155,8 +156,9 @@ private[sql] class CacheManager extends Logging
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/8416
@rxin I believe I fixed that limitation in my recent refactoring.
---
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/13424
Thanks! Merged into master and 2.0
---
Repository: spark
Updated Branches:
refs/heads/master aec502d91 -> 127a6678d
[SPARK-15489][SQL] Dataset kryo encoder won't load custom user settings
## What changes were proposed in this pull request?
Serializer instantiation will consider existing SparkConf
## How was this patch tested?
Repository: spark
Updated Branches:
refs/heads/branch-2.0 bc53422ad -> e6ebb547b
[SPARK-15489][SQL] Dataset kryo encoder won't load custom user settings
## What changes were proposed in this pull request?
Serializer instantiation will consider existing SparkConf
## How was this patch
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/13147
Thanks, merged to master.
---
Repository: spark
Updated Branches:
refs/heads/master fb219029d -> 667d4ea7b
[SPARK-6320][SQL] Move planLater method into GenericStrategy.
## What changes were proposed in this pull request?
This PR moves `QueryPlanner.planLater()` method into `GenericStrategy` for
extra strategies to be
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/13424
LGTM, can you update the description (it still says WIP).
---
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/13486
Merging to master and 2.0
---
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/13597
Seems fine to me.
---
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/13604
I'm not sure I agree with all of the reasoning here. Here are my thoughts:
- `SQLContext` should probably not break any APIs (it's only there for
compatibility anyway).
- In `SparkSession
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/13549
This is okay for 2.0, but we'll need to rethink the way we are doing query
planning to handle incremental input.
---
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/13549#discussion_r66361438
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/UnsupportedOperationsSuite.scala
---
@@ -189,9 +189,20 @@ class
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/13549#discussion_r66361373
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingAggregationSuite.scala
---
@@ -104,6 +104,31 @@ class StreamingAggregationSuite
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/13549#discussion_r66361125
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/UnsupportedOperationChecker.scala
---
@@ -43,6 +43,41 @@ object
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/13424
ok to test
---