Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22485#discussion_r219916796
--- Diff:
common/network-yarn/src/main/java/org/apache/spark/network/yarn/YarnShuffleService.java
---
@@ -168,6 +170,15 @@ protected void serviceInit
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/22485
Sorry for not following through on getting this into Apache.
FWIW, it's been in the Palantir fork of Spark for over a year:
https://github.com/palantir/spark/search?q=SPARK-18364_q=SPARK
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/21334
Hi @rubenfiszel thanks for the contribution! Can you please take a glance
through http://spark.apache.org/contributing.html to see the best way to get
your change merged into Apache Spark?
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/20372
Tagging folks who have touched this code recently: @vgankidi @ericl @davies
This seems to provide a more compact packing in every scenario, which
should improve execution times. One risk
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/20372
Please fix the scala style checks --
```
Running Scala style checks
Scalastyle checks failed at following
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20372#discussion_r163464207
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/DataSourceScanExec.scala
---
@@ -445,16 +445,25 @@ case class FileSourceScanExec
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/20372
Jenkins, this is ok to test
---
-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20372#discussion_r163419745
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/FileSourceStrategySuite.scala
---
@@ -142,15 +142,16 @@ class
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20372#discussion_r163424784
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/DataSourceScanExec.scala
---
@@ -445,16 +445,25 @@ case class FileSourceScanExec
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20372#discussion_r163419675
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/FileSourceStrategySuite.scala
---
@@ -142,15 +142,16 @@ class
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20372#discussion_r163424415
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/DataSourceScanExec.scala
---
@@ -445,16 +445,25 @@ case class FileSourceScanExec
GitHub user ash211 opened a pull request:
https://github.com/apache/spark/pull/19917
Add failing test for select with a splatted stream
## What changes were proposed in this pull request?
Add additional test.
## How was this patch tested?
Additional test
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/19257
@cloud-fan @gatorsmile any more changes needed on this PR before merging?
I don't see any un-addressed comments left
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/19829
Looks like a fix for https://issues.apache.org/jira/browse/SPARK-19552 --
should that be reopened now that netty is deprecating 4.0.x so we can't do it
"Later"?
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/19702#discussion_r151569727
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetSchemaConverter.scala
---
@@ -372,23 +381,18 @@ private[parquet
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/19708#discussion_r151010772
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/SaveIntoDataSourceCommandSuite.scala
---
@@ -0,0 +1,48
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/19708
Jenkins, this is ok to test
---
GitHub user ash211 opened a pull request:
https://github.com/apache/spark/pull/19694
[SPARK-22470][DOC][SQL] functions.hash is also used internally for shuffle
and bucketing
## What changes were proposed in this pull request?
Add clarifying documentation to the scaladoc
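SPARK-22470 above documents that `functions.hash` is Murmur3-based and shared with Spark's internal shuffle partitioning and bucketing. As an illustrative aside, here is a minimal pure-Python sketch of the textbook Murmur3 x86 32-bit hash; note this is the standard algorithm, not Spark's exact internal variant (Spark fixes the seed at 42 and hashes each SQL type's bytes in its own way), so outputs will not match Spark's `hash` column.

```python
def murmur3_x86_32(data: bytes, seed: int) -> int:
    """Textbook Murmur3 x86 32-bit hash; returns an unsigned 32-bit int."""
    c1, c2 = 0xcc9e2d51, 0x1b873593
    h = seed & 0xffffffff
    n = len(data)

    # Body: mix 4 bytes at a time.
    for i in range(0, n - n % 4, 4):
        k = int.from_bytes(data[i:i + 4], "little")
        k = (k * c1) & 0xffffffff
        k = ((k << 15) | (k >> 17)) & 0xffffffff  # rotl32(k, 15)
        k = (k * c2) & 0xffffffff
        h ^= k
        h = ((h << 13) | (h >> 19)) & 0xffffffff  # rotl32(h, 13)
        h = (h * 5 + 0xe6546b64) & 0xffffffff

    # Tail: remaining 1-3 bytes.
    tail = data[n - n % 4:]
    k = 0
    if len(tail) >= 3:
        k ^= tail[2] << 16
    if len(tail) >= 2:
        k ^= tail[1] << 8
    if len(tail) >= 1:
        k ^= tail[0]
        k = (k * c1) & 0xffffffff
        k = ((k << 15) | (k >> 17)) & 0xffffffff
        k = (k * c2) & 0xffffffff
        h ^= k

    # Finalization mix forces avalanching of the last bytes.
    h ^= n
    h ^= h >> 16
    h = (h * 0x85ebca6b) & 0xffffffff
    h ^= h >> 13
    h = (h * 0xc2b2ae35) & 0xffffffff
    h ^= h >> 16
    return h
```

Per the published Murmur3 test vectors, the empty input hashes to 0 under seed 0 and to 0x514E28B7 under seed 1.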
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/19468#discussion_r147021138
--- Diff:
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/ExecutorPodFactory.scala
---
@@ -0,0 +1,229
GitHub user ash211 opened a pull request:
https://github.com/apache/spark/pull/19574
[SPARK-21991][LAUNCHER][FOLLOWUP] Fix java lint
## What changes were proposed in this pull request?
Fix java lint
## How was this patch tested?
Run `./dev/lint-java`
You
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/19217
https://github.com/apache/spark/pull/19574
---
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/19217
```
Running Java style checks
Using
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/19217
@nivox can you please update the PR title when you get the chance?
---
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/19217
How about `[SPARK-21991][LAUNCHER] Fix race condition in
LauncherServer#acceptConnections` ?
---
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/19217#discussion_r146493105
--- Diff:
launcher/src/main/java/org/apache/spark/launcher/LauncherServer.java ---
@@ -232,20 +232,20 @@ public void run
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/19217#discussion_r146492057
--- Diff:
launcher/src/main/java/org/apache/spark/launcher/LauncherServer.java ---
@@ -232,20 +232,20 @@ public void run
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/19217#discussion_r146493161
--- Diff:
launcher/src/main/java/org/apache/spark/launcher/LauncherServer.java ---
@@ -232,20 +232,20 @@ public void run
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/19486
Updated
---
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/19269#discussion_r145296142
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/WriteToDataSourceV2Command.scala
---
@@ -0,0 +1,114
GitHub user ash211 opened a pull request:
https://github.com/apache/spark/pull/19486
[SPARK-22268][BUILD] Fix lint-java
## What changes were proposed in this pull request?
Fix java style issues
## How was this patch tested?
Run `./dev/lint-java` locally
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/19468
Jenkins, ok to test
---
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/19131
A check for unused imports should be added to scalastyle to prevent these
from creeping back in. If this PR were accompanied by that check (failing
before, now passing) I think the merge conflicts
GitHub user ash211 opened a pull request:
https://github.com/apache/spark/pull/19164
[SPARK-21953] Show both memory and disk bytes spilled if either is present
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/ash211/spark patch-3
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/19136#discussion_r137469790
--- Diff:
sql/core/src/main/java/org/apache/spark/sql/sources/v2/reader/ReadTask.java ---
@@ -0,0 +1,43 @@
+/*
+ * Licensed to the Apache Software
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/19136#discussion_r137471056
--- Diff:
sql/core/src/main/java/org/apache/spark/sql/sources/v2/reader/upward/StatisticsSupport.java
---
@@ -0,0 +1,26 @@
+/*
+ * Licensed
GitHub user ash211 opened a pull request:
https://github.com/apache/spark/pull/19153
SPARK-21941 Stop storing unused attemptId in SQLTaskMetrics
## What changes were proposed in this pull request?
In a driver heap dump containing 390,105 instances of SQLTaskMetrics
GitHub user ash211 opened a pull request:
https://github.com/apache/spark/pull/19088
[SPARK-21875][BUILD] Fix Java style bugs
## What changes were proposed in this pull request?
Fix Java code style so `./dev/lint-java` succeeds
## How was this patch tested
GitHub user ash211 opened a pull request:
https://github.com/apache/spark/pull/18996
[MINOR][TYPO] Fix typos: runnning and Excecutors
## What changes were proposed in this pull request?
Fix typos
## How was this patch tested?
Existing tests
You can merge
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18913#discussion_r132721021
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -1792,6 +1796,9 @@ class SparkContext(config: SparkConf) extends Logging
GitHub user ash211 opened a pull request:
https://github.com/apache/spark/pull/18913
[SPARK-21563][CORE] Fix race condition when serializing TaskDescriptions
and adding jars
## What changes were proposed in this pull request?
Fix the race condition when serializing
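SPARK-21563 above involved a TaskDescription being serialized while another thread concurrently added jars to the shared mutable map backing it. The general defensive pattern, sketched below in Python with hypothetical names (this is not Spark's actual code), is to snapshot shared mutable state under a lock before iterating or serializing it:

```python
import json
import threading


class TaskRegistry:
    """Holds mutable state shared between a scheduler thread and an event thread."""

    def __init__(self):
        self._lock = threading.Lock()
        self._jars = {}  # jar name -> timestamp

    def add_jar(self, name, timestamp):
        with self._lock:
            self._jars[name] = timestamp

    def serialize(self):
        # Snapshot under the lock, then serialize the immutable copy.
        # Iterating self._jars directly while add_jar() runs concurrently
        # risks a "dictionary changed size during iteration" error.
        with self._lock:
            snapshot = dict(self._jars)
        return json.dumps(snapshot, sort_keys=True)
```

The key design choice is that the lock is held only long enough to copy, so the (potentially slow) serialization itself never blocks writers.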
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18877#discussion_r132583520
--- Diff:
launcher/src/main/java/org/apache/spark/launcher/ChildProcAppHandle.java ---
@@ -118,14 +116,40 @@ void setChildProc(Process childProc, String
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18877#discussion_r132583553
--- Diff:
launcher/src/main/java/org/apache/spark/launcher/ChildProcAppHandle.java ---
@@ -166,4 +185,15 @@ private synchronized void fireEvent(boolean
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/18789
@srowen sorry for not picking up on this -- thanks for pushing it over the
finish line in your PR!
---
If your project is set up for it, you can reply to this email and have your
reply appear
GitHub user ash211 opened a pull request:
https://github.com/apache/spark/pull/18789
Bump jackson from 2.6.5 to 2.6.7.1 (#241)
This brings in a security fix for CVE-2017-7525 in the jackson-databind
library, which Spark uses.
When releasing this patch, upstream released
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/18658
FYI for future reviewers as well, we've been running an [extremely similar
patch](https://github.com/palantir/spark/pull/181) to PJ's on our distribution
of Spark for the past several months and had
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18658#discussion_r127819158
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/CodeGenerator.scala
---
@@ -1037,24 +1037,22 @@ object
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/18658
Jenkins this is ok to test
---
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/18621
jenkins this is ok to test
---
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18581#discussion_r126534985
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/source/libsvm/LibSVMOptions.scala ---
@@ -41,11 +41,15 @@ private[libsvm] class LibSVMOptions(@transient
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/18176
Jenkins this is ok to test
---
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/18406
@robert3005 looks like a bunch of tests are failing with
`java.lang.IllegalArgumentException: A metric named
local-1498509661743.driver.HiveExternalCatalog.fileCacheHits already exists
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/18427
Jenkins this is ok to test
---
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/18406
Jenkins this is ok to test
---
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/18209
@manojlds I'm a part of the Spark-on-k8s team that's currently building k8s
integration for Spark outside of the Apache Spark repo. You can follow our
work at https://github.com/apache-spark-on-k8s
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/17935
@JoshRosen what was the other type of database you were using?
---
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/18176
Jenkins this is ok to test
---
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/17680
Are there any comments on this PR or is it ready to be merged?
---
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/17680
Any further thoughts on this? It was quite surprising for one of our users
so I wanted to make sure it was fixed in a future Apache release
---
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17680#discussion_r112531312
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFilterSuite.scala
---
@@ -536,4 +537,43 @@ class
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17680#discussion_r112529697
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/FileSourceStrategySuite.scala
---
@@ -487,6 +487,20 @@ class
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17680#discussion_r112530989
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFilterSuite.scala
---
@@ -536,4 +537,43 @@ class
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17680#discussion_r112285677
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFilterSuite.scala
---
@@ -536,4 +537,43 @@ class
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17680#discussion_r112285883
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFilterSuite.scala
---
@@ -536,4 +537,43 @@ class
Github user ash211 closed the pull request at:
https://github.com/apache/spark/pull/17667
---
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/17667
Agreed, will close for now until there's a fix to go along with the test
---
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/17667
@HyukjinKwon thanks for looking at this! Please feel free to open a Jira
so we can begin discussing a fix. I haven't started working on a patch yet,
only have the test case at this point. So
GitHub user ash211 opened a pull request:
https://github.com/apache/spark/pull/17667
Failing test for parquet predicate pushdown on dots with columns
// checking against Jenkins to make sure this is still live on master
You can merge this pull request into a Git repository
GitHub user ash211 opened a pull request:
https://github.com/apache/spark/pull/17664
Typo fix: distitrbuted -> distributed
## What changes were proposed in this pull request?
Typo fix: distitrbuted -> distributed
## How was this patch tested?
Ex
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/17401
@jerryshao ready for re-review
---
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17401#discussion_r108562943
--- Diff:
common/network-yarn/src/main/java/org/apache/spark/network/yarn/YarnShuffleServiceMetrics.java
---
@@ -0,0 +1,123 @@
+/*
+ * Licensed
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/17401
Ready for further review.
---
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/17411
Thanks for the contribution @juanrh ! I'm happy to see contributions no
matter how small. For larger changes you would need to file a Jira ticket, but
this is small enough that it's not necessary
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/17401
Thanks again for the comments @jerryshao ! I've now added some tests to
verify that the metrics get converted in the expected way to the collector, and
camel-cased shuffleService
---
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17401#discussion_r107840148
--- Diff:
common/network-yarn/src/main/java/org/apache/spark/network/yarn/YarnShuffleServiceMetrics.java
---
@@ -0,0 +1,118 @@
+/*
+ * Licensed
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17401#discussion_r107840109
--- Diff:
common/network-yarn/src/main/java/org/apache/spark/network/yarn/YarnShuffleService.java
---
@@ -166,6 +170,23 @@ protected void serviceInit
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/17401
Thanks for taking a look @jerryshao ! I've reformatted to two-space
indentation and run `./dev/lint-java` to make sure this code passes the linter.
`src/main/java/org/apache/spark/sql
GitHub user ash211 opened a pull request:
https://github.com/apache/spark/pull/17401
[SPARK-18364][YARN] Expose metrics for YarnShuffleService
Registers the shuffle server's metrics with the Hadoop Node Manager's
DefaultMetricsSystem.
## What changes were proposed
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/17399
Thanks for contributing to Spark @roxannemoslehi !
I think Sean just means updating the title to something more like `[DOCS]
Clarify round mode in format_number function`. It doesn't feel
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/14615
Jenkins, this is ok to test.
---
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/14615
@robert3005 looks like this has unit test failures on
`org.apache.spark.sql.hive.orc.OrcSourceSuite.SPARK-19459/SPARK-18220: read
char/varchar column written by Hive` -- is that a flake
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/14615
Jenkins, this is ok to test.
---
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/16959
Any last changes before merging?
---
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16959#discussion_r103576994
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/OutputCommitCoordinatorSuite.scala
---
@@ -195,6 +195,17 @@ class OutputCommitCoordinatorSuite
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/16959
@vanzin are you the right person to review this?
---
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16959#discussion_r101762924
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/OutputCommitCoordinator.scala ---
@@ -48,25 +48,28 @@ private[spark] class OutputCommitCoordinator
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/16575
@hvanhovell does that description make sense?
---
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/16503
Making this idempotent looks great. I think there's a separate issue with
this code still not handling poorly-timed preemption, but let's deal with that
in a separate ticket / PR.
Good
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/16575
@hvanhovell the need is also explained in the Jira ticket:
https://issues.apache.org/jira/browse/SPARK-19213
Does that code snippet make sense?
---
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/16503
@jinxing64 can you please fix the failing Scala style tests?
---
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/16503
You covered my concerns!
I think this will fix some parts of this problem for sure, not sure if it
covers every possible case though.
---
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16503#discussion_r95660339
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/OutputCommitCoordinator.scala ---
@@ -165,9 +167,14 @@ private[spark] class OutputCommitCoordinator
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16503#discussion_r95659921
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/OutputCommitCoordinatorSuite.scala
---
@@ -221,6 +227,22 @@ private case class
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16503#discussion_r95659396
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/OutputCommitCoordinatorSuite.scala
---
@@ -189,6 +188,13 @@ class OutputCommitCoordinatorSuite
GitHub user ash211 opened a pull request:
https://github.com/apache/spark/pull/16558
Fix missing close-parens for In filter's toString
Otherwise the opening parenthesis isn't closed in query plan descriptions
of batch scans.
PushedFilters: [In(COL_A
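The fix in PR 16558 is to the `In` filter's string rendering, where the closing characters were dropped. A minimal Python sketch of the corrected pattern (class and field names are illustrative stand-ins, not Spark's source):

```python
class In:
    """Illustrative stand-in for a data source 'value in set' filter."""

    def __init__(self, attribute, values):
        self.attribute = attribute
        self.values = values

    def __repr__(self):
        # The bug being fixed was an unterminated rendering such as
        # "In(COL_A, [1,2" -- note the closing bracket and parenthesis here.
        return f"In({self.attribute}, [{','.join(map(str, self.values))}])"
```

With the fix, a pushed filter renders as a balanced `In(COL_A, [1,2])` in the plan description instead of being cut off mid-list.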
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/16281
What are the specific patches to parquet that folks are proposing should be
included in a parquet 1.8.1-spark1 ? Or what would be desired in a
parquet-released 1.8.2 ?
---
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/16092
With a k8s backend on the way I do think it adds a nice organization for
these 3 clearly grouped modules
---
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/16061
Another external scheduler backend I'm aware of is Two Sigma's scheduler
backend for the system they've created called
[Cook](https://github.com/twosigma/Cook). See
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/15932
Yep that's precisely what I was envisioning. Thanks @srowen !
---
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15835#discussion_r87703118
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetQuerySuite.scala
---
@@ -703,6 +705,81 @@ class
GitHub user ash211 opened a pull request:
https://github.com/apache/spark/pull/15486
Typo: form -> from
## What changes were proposed in this pull request?
Minor typo fix
## How was this patch tested?
Existing unit tests on Jenkins
You can merge this p