GitHub user srowen opened a pull request:
https://github.com/apache/spark/pull/22729
[SPARK-25737][CORE] Remove JavaSparkContextVarargsWorkaround
## What changes were proposed in this pull request?
Remove JavaSparkContextVarargsWorkaround
## How was this patch
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22727#discussion_r225179121
--- Diff:
sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/HiveThriftServer2.scala
---
@@ -71,6 +71,12 @@ object
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/22714
Shouldn't it go in `commonHeaderNodes`?
Looks like this was added waaay back in
https://github.com/JoshRosen/spark/commit/6aa08c39cf30fa5c4ed97f4fff16371b9030a2e6
by @tdas but never
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/22706
It makes some sense, but how much difference does it make, performance-wise?
---
-
To unsubscribe, e-mail: reviews-unsubscr
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22705#discussion_r224965160
--- Diff:
core/src/main/scala/org/apache/spark/util/io/ChunkedByteBuffer.scala ---
@@ -195,7 +196,11 @@ object ChunkedByteBuffer {
val is = new
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22383#discussion_r224964998
--- Diff: project/MimaExcludes.scala ---
@@ -36,6 +36,8 @@ object MimaExcludes {
// Exclude rules for 3.0.x
lazy val v30excludes
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22383#discussion_r224965080
--- Diff: project/MimaExcludes.scala ---
@@ -36,9 +36,11 @@ object MimaExcludes {
// Exclude rules for 3.0.x
lazy val v30excludes
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22383#discussion_r224965113
--- Diff: project/MimaExcludes.scala ---
@@ -36,9 +36,11 @@ object MimaExcludes {
// Exclude rules for 3.0.x
lazy val v30excludes
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22703#discussion_r224936431
--- Diff: docs/streaming-kafka-0-10-integration.md ---
@@ -3,7 +3,11 @@ layout: global
title: Spark Streaming + Kafka Integration Guide (Kafka broker
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/22414
Yeah, the test that failed here asserts that it's an `AnalysisException`. I
guess it could be removed. The thing is, many other cases are still handled as
`AnalysisException`. Maybe it'
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/22690
Merged to master
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/22670
I don't so much mean that much refactoring. I wonder if there are 1-2 other
places where common Kafka params are set in tests that we could add this to for
now, that kind of thing. This change
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/22593
Merged to master/2.4
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/22322
Ping @npoberezkin
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/22670
@dilipbiswal I like this change too. The suite goes from 4:34 to 0:53. I
wonder if we can make this change elsewhere in general Kafka test config? This
kind of setting seems useful everywhere
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/21322#discussion_r224874828
--- Diff:
core/src/main/scala/org/apache/spark/storage/memory/MemoryStore.scala ---
@@ -384,15 +385,30 @@ private[spark] class MemoryStore
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/21322#discussion_r224875899
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -1930,6 +1930,18 @@ private[spark] object Utils extends Logging
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/21322#discussion_r224875111
--- Diff:
core/src/main/scala/org/apache/spark/storage/memory/MemoryStore.scala ---
@@ -384,15 +385,30 @@ private[spark] class MemoryStore
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22703#discussion_r224870517
--- Diff: python/pyspark/streaming/tests.py ---
@@ -1047,259 +1046,6 @@ def check_output(n):
self.ssc.stop(True, True)
-class
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/22689
Merged to master/2.4
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22383#discussion_r224866566
--- Diff: project/MimaExcludes.scala ---
@@ -36,6 +36,8 @@ object MimaExcludes {
// Exclude rules for 3.0.x
lazy val v30excludes
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/22700
Merged to master
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/22678
Merged to master
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/22657
Merged to master
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/22645
Merged to master
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/21588
I know this is probably just reviving an old thread elsewhere, but we
don't know how to update our 1.2.1 Hive fork anyway, it seems? If so, and the
fork is undesirable, seems like time to dr
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22703#discussion_r224621015
--- Diff: python/pyspark/streaming/tests.py ---
@@ -1047,259 +1046,6 @@ def check_output(n):
self.ssc.stop(True, True)
-class
GitHub user srowen opened a pull request:
https://github.com/apache/spark/pull/22703
[SPARK-25705][BUILD][STREAMING] Remove Kafka 0.8 integration
## What changes were proposed in this pull request?
Remove Kafka 0.8 integration
## How was this patch tested
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22701#discussion_r224620279
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -2150,8 +2150,10 @@ class Analyzer
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/22692
Merged to master so I can get on with removing Kafka 0.8
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/22594
Merged to master/2.4/2.3 as a clean simple bug fix
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22259#discussion_r224607076
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/ScalaUDF.scala
---
@@ -47,7 +48,8 @@ case class ScalaUDF
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22701#discussion_r224606888
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/AnalysisSuite.scala
---
@@ -351,8 +351,8 @@ class AnalysisSuite extends
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/22692
Passes with Maven and SBT, and sounds like broad support. It's clean to
remove, so I'll go ahead for 3.0
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/22671
Merged to master
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/22691
Merged to master
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22701#discussion_r224547656
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/AnalysisSuite.scala
---
@@ -351,8 +351,8 @@ class AnalysisSuite extends
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/22699
Agree that's a good idea
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22259#discussion_r224540341
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/ScalaUDF.scala
---
@@ -47,7 +48,8 @@ case class ScalaUDF
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22259#discussion_r224540330
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/ScalaUDF.scala
---
@@ -47,7 +48,8 @@ case class ScalaUDF
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22259#discussion_r224534401
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/ScalaUDF.scala
---
@@ -47,7 +48,8 @@ case class ScalaUDF
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/22593
Huh, I'm really confused why this would fail, or at least, start failing
right now. We use these HTML tags elsewhere. You could try updating the unidoc
plugin version, but I think it's a
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/22657
Yeah, I could see the argument both ways for keeping all the tests in
CastSuite or just checking a subset. We already got the test down considerably,
though it's still like 24 seconds. Is ther
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/22593
Hm, that's a weird error. Big javadoc failures from unrelated classes. This
looks like errors you get when you run javadoc on translated Scala classes. No
idea why it's just popped up.
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22259#discussion_r224512333
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/ScalaUDF.scala
---
@@ -47,7 +48,8 @@ case class ScalaUDF
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/22695
OK, but this kind of thing isn't worth opening a PR for. If you can maybe
get some related minor changes together, that
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/22383
Yeah, you'll have to add this to the 3.0 excludes section of
project/MimaExcludes:
`ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.api.jav
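For context, a MiMa exclude entry in `project/MimaExcludes.scala` looks roughly like the sketch below. The class name used here is a hypothetical placeholder, not the actual (truncated) entry under discussion:

```scala
// Sketch of a 3.0 exclude rule in project/MimaExcludes.scala.
// "org.apache.spark.example.RemovedClass" is a placeholder for the removed class.
lazy val v30excludes = Seq(
  // Suppresses the binary-compatibility error MiMa reports for a class
  // that was deliberately removed for Spark 3.0.
  ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.example.RemovedClass")
)
```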
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22259#discussion_r224506066
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/ScalaUDF.scala
---
@@ -47,7 +48,8 @@ case class ScalaUDF
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/22318
Oh I see there was indeed more discussion on this, and it does relate to
resolving columns to joined DataFrames. I don't know enough to bless this
change, but it seems reasonable. @maropu app
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/22383
Oops, I mean SPARK-25362. SPARK-25395 was a duplicate.
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/22690
Yes, for 3.0. It's an old API mistake
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22259#discussion_r224263116
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/ScalaUDF.scala
---
@@ -47,7 +48,8 @@ case class ScalaUDF
GitHub user srowen opened a pull request:
https://github.com/apache/spark/pull/22692
[SPARK-25598][STREAMING][BUILD] Remove flume connector in Spark 3
## What changes were proposed in this pull request?
Removes all vestiges of Flume in the build, for Spark 3.
I don
GitHub user srowen opened a pull request:
https://github.com/apache/spark/pull/22691
[SPARK-24109][CORE] Remove class SnappyOutputStreamWrapper
## What changes were proposed in this pull request?
Remove SnappyOutputStreamWrapper and other workaround now that new Snappy
GitHub user srowen opened a pull request:
https://github.com/apache/spark/pull/22690
[SPARK-19287][CORE][STREAMING] JavaPairRDD flatMapValues requires function
returning Iterable, not Iterator
## What changes were proposed in this pull request?
Fix old oversight in API
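As a sketch of the distinction at issue (plain Java, not Spark's actual API; names are illustrative): a function returning `Iterable` yields something that can be traversed afresh, whereas an `Iterator` is exhausted after a single pass, so a `flatMapValues`-style method is better off accepting `Iterable`:

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// Minimal sketch (not Spark's implementation) of a flatMapValues-style helper:
// each value is expanded into zero or more values, and the key is repeated.
public class FlatMapValuesSketch {
    static <K, V, U> List<Map.Entry<K, U>> flatMapValues(
            List<Map.Entry<K, V>> pairs, Function<V, Iterable<U>> f) {
        List<Map.Entry<K, U>> out = new ArrayList<>();
        for (Map.Entry<K, V> e : pairs) {
            // Because f returns Iterable, this traversal is always safe;
            // an Iterator argument could already have been consumed.
            for (U u : f.apply(e.getValue())) {
                out.add(new SimpleEntry<>(e.getKey(), u));
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, String>> input = Arrays.asList(
                new SimpleEntry<>("a", "1,2"), new SimpleEntry<>("b", "3"));
        // Split each value on commas; the key repeats for each piece.
        List<Map.Entry<String, String>> out =
                flatMapValues(input, v -> Arrays.asList(v.split(",")));
        System.out.println(out);  // [a=1, a=2, b=3]
    }
}
```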
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22383#discussion_r224228243
--- Diff: core/src/test/java/test/org/apache/spark/JavaAPISuite.java ---
@@ -476,10 +476,10 @@ public void leftOuterJoin() {
new Tuple2<>
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/22689
I guess that doing nothing is better than an error screen. Is it possible
to just skip reading incomplete files here? I don't know this code well. That
sounds b
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/22615
Merged to master. Note that the master hadoop 2.6 job will fail immediately
now, so ignore it. On the upside ... this job already won't take much of any
time from the Jenkins cl
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/22689
Should the Event Log be available for running apps? Or if it's not going to
work, disable it where it can't be shown, but I suppose that could be
difficult. This just silently sends you b
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/22615
I tried a release build that causes `--pip` and `--r` to be set, and the
result looked OK. Both pyspark and R packages built and seemed normal. The
source build worked too and comes before binary
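For reference, the `--pip` and `--r` flags mentioned are options of Spark's `dev/make-distribution.sh`; an invocation along these lines (profiles and name are illustrative, not the exact release command) builds the pyspark and R packages alongside the binary distribution:

```shell
# Illustrative sketch, run from a Spark source checkout; the Maven profiles
# shown are examples, not the exact set used by the release scripts.
./dev/make-distribution.sh --name custom --tgz --pip --r \
  -Phive -Phive-thriftserver -Pyarn
```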
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22657#discussion_r224151216
--- Diff: core/src/test/scala/org/apache/spark/SparkFunSuite.scala ---
@@ -106,4 +107,14 @@ abstract class SparkFunSuite
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/22593
Although it's the Scala API, it's callable from Java just as well. There's
no Java-specific API here. So, yeah, actually it makes sense to have javadoc
and scaladoc for this. And I
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22594#discussion_r224146203
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileScanRDD.scala
---
@@ -70,6 +70,8 @@ class FileScanRDD
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/22641
Merged to master
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/22671
I like the change. This test is down to 5 seconds now. Unfortunately I
don't see speedup in other Kafka tests, but I think we should leave th
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/22593
Have a look at `Column.scala` and `Dataset.scala` in
`org.apache.spark.sql`. But, on a second look, this is how I see the lists
render:
https://user-images.githubusercontent.com/822522
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/21816#discussion_r224128493
--- Diff:
core/src/test/scala/org/apache/spark/deploy/rest/StandaloneRestSubmitSuite.scala
---
@@ -83,6 +83,26 @@ class StandaloneRestSubmitSuite extends
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22414#discussion_r223861364
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/TimeWindow.scala
---
@@ -137,16 +139,44 @@ object TimeWindow
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/21524
Ping @tengpeng to update or close
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/21858
Ping @jaceklaskowski to update or close
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22671#discussion_r223776123
--- Diff:
external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaSinkSuite.scala
---
@@ -332,7 +332,9 @@ class KafkaSinkSuite extends
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/22593
Ping @niofire to update or close
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22641#discussion_r223756260
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/CompressionCodecSuite.scala
---
@@ -262,7 +261,10 @@ class CompressionCodecSuite extends
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22671#discussion_r223752722
--- Diff:
external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaSinkSuite.scala
---
@@ -332,7 +332,9 @@ class KafkaSinkSuite extends
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22654#discussion_r223751904
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVUtils.scala
---
@@ -97,23 +97,22 @@ object CSVUtils
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/22615
@felixcheung regarding building PIP and R in one release, yeah I was
wondering that too. Ideally it would just be one. If the build changes only
affect the source release, that's OK, as th
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22615#discussion_r223729888
--- Diff: docs/index.md ---
@@ -30,9 +30,6 @@ Spark runs on Java 8+, Python 2.7+/3.4+ and R 3.1+. For
the Scala API, Spark {{s
uses Scala
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22615#discussion_r223729095
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala ---
@@ -71,7 +71,7 @@ class HadoopTableReader(
// Hadoop honors
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22675#discussion_r223728031
--- Diff: docs/ml-datasource.md ---
@@ -0,0 +1,51 @@
+---
+layout: global
+title: Data sources
+displayTitle: Data sources
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22654#discussion_r223726757
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVUtils.scala
---
@@ -97,23 +97,22 @@ object CSVUtils
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22654#discussion_r223724099
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/csv/CSVSuite.scala
---
@@ -1826,4 +1826,13 @@ class CSVSuite extends QueryTest
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22676#discussion_r223722902
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVHeaderChecker.scala
---
@@ -0,0 +1,131 @@
+/*
+ * Licensed to
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22675#discussion_r223720371
--- Diff: docs/ml-datasource.md ---
@@ -0,0 +1,51 @@
+---
+layout: global
+title: Data sources
+displayTitle: Data sources
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22675#discussion_r223720808
--- Diff: docs/ml-datasource.md ---
@@ -0,0 +1,51 @@
+---
+layout: global
+title: Data sources
+displayTitle: Data sources
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22675#discussion_r223720105
--- Diff: docs/ml-datasource.md ---
@@ -0,0 +1,51 @@
+---
+layout: global
+title: Data sources
+displayTitle: Data sources
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22675#discussion_r223719759
--- Diff: docs/ml-datasource.md ---
@@ -0,0 +1,51 @@
+---
+layout: global
+title: Data sources
+displayTitle: Data sources
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/22659
Merged to master
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/22615
Yeah this does need to be in a public repo.
apache/spark-jenkins-configurations or something. We can ask INFRA to create
them. But, I'm not against just putting them in dev/ or something in the
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/22615
I guess we've just pinged @shaneknapp ! But I figured the jobs would simply
fail and could be removed at leisure.
Yes, this mechanism is a little weird but may be the simplest thing he
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22623#discussion_r223491937
--- Diff:
core/src/test/scala/org/apache/spark/deploy/SparkSubmitSuite.scala ---
@@ -74,20 +74,27 @@ trait TestPrematureExit {
@volatile var
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/21755
Minor stuff: we usually tag this with `[MINOR]` in the title to be clear
there's no JIRA. Also ideal to batch together small related changes but I don't
know that there was anything else
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/22672
The original change in #22631 made the test time go down from about 2:30 to
0:17. See build 96945:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/96945/testReport/junit
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/22649
This kind of stuff did fail when we were updating for 2.12 and we had to
make a lot of similar changes to the Java code for this reason, yeah
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/22649
The pull request builder runs Scala 2.11, and this only becomes ambiguous
in 2.12 (long story). For now 2.12 is still a secondary build. I suspect we'll
switch it to be the primary scala ve
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/22623
Just to check my understanding, `exitedCleanly` is `false` even when the
expected exception is thrown? OK that makes sense
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/22595
CC @jerryshao for https://github.com/apache/spark/pull/14617 where this was
added. It looks like the display is on purpose, but can you clarify?
I don't think a "show additional co
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/21816
Ping @srinathshankar @ericl again for comments? I don't know this well, but
seems like a low risk change at worst.
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22466#discussion_r223395695
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
---
@@ -207,6 +207,16 @@ class SessionCatalog
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/22637
OK, trying this again. Tests have definitely run this time and we've had
another good pass at small review changes.
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22641#discussion_r223392700
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/CompressionCodecSuite.scala
---
@@ -262,7 +261,10 @@ class CompressionCodecSuite extends