Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/14671
@andreweduffy @rxin Maybe I can go for the simple benchmark quickly (maybe
within this weekend) and open a PR to disable Parquet row-by-row filtering if
it makes sense and this can be the
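A minimal sketch of the kind of micro-benchmark being proposed, in plain Python with a toy timing helper (this is not Spark's benchmark framework; the data and predicate are made up for illustration):

```python
import time

def timed(label, fn):
    """Run fn once and report elapsed milliseconds (toy harness, not Spark's)."""
    start = time.perf_counter()
    result = fn()
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"{label}: {elapsed_ms:.1f} ms")
    return result

# Compare a row-by-row predicate against a plain full scan of the same data.
rows = list(range(1_000_000))
kept = timed("row-by-row filter", lambda: sum(1 for r in rows if r % 10 == 0))
total = timed("full scan", lambda: len(rows))
```

A real comparison would read the same Parquet file with record-level filtering enabled and disabled and time both paths.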
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/14620
LGTM - pending Jenkins.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/14279
I will take another look tomorrow. Please feel free to take a look further
because it's almost complete I think.
---
GitHub user viirya opened a pull request:
https://github.com/apache/spark/pull/14687
[SPARK-17107][SQL] Remove redundant pushdown rule for Union
## What changes were proposed in this pull request?
The `Optimizer` rules `PushThroughSetOperations` and `PushDownPredicate`
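For context, the redundancy rests on the fact that a filter distributes over a union; the identity can be checked on plain Python lists (illustrative only, not Spark code):

```python
# filter(p)(A union B) is equivalent to filter(p)(A) union filter(p)(B)
a = [1, 2, 3, 4]
b = [3, 5, 7]
p = lambda x: x % 2 == 1

filtered_after_union = [x for x in a + b if p(x)]
pushed_down = [x for x in a if p(x)] + [x for x in b if p(x)]
assert filtered_after_union == pushed_down == [1, 3, 3, 5, 7]
```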
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14620
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/63912/
Test PASSed.
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/14279
Let me rebase, as the branch has now diverged a lot.
---
GitHub user hvanhovell opened a pull request:
https://github.com/apache/spark/pull/14685
[SPARK-17106][SQL] Simplify the SubqueryExpression interface
## What changes were proposed in this pull request?
The current subquery expression interface contains a little bit of
technical
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14279
**[Test build #63915 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63915/consoleFull)**
for PR 14279 at commit
Github user phalodi closed the pull request at:
https://github.com/apache/spark/pull/14684
---
Github user phalodi commented on the issue:
https://github.com/apache/spark/pull/14684
@srowen OK, so I am closing this PR as you suggested.
---
GitHub user zenglinxi0615 opened a pull request:
https://github.com/apache/spark/pull/14686
[SPARK-16253][SQL] make spark sql compatible with hive sql that using…
## What changes were proposed in this pull request?
Make Spark SQL compatible with Hive SQL queries that use Python
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14686
Can one of the admins verify this patch?
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14684
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14684
**[Test build #63914 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63914/consoleFull)**
for PR 14684 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14683
**[Test build #63913 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63913/consoleFull)**
for PR 14683 at commit
Github user GraceH commented on the issue:
https://github.com/apache/spark/pull/14683
@srowen Here we go. Please feel free to let me know your comments.
---
GitHub user phalodi opened a pull request:
https://github.com/apache/spark/pull/14684
[SPARK-17105][CORE] App name will be random UUID while creating spark
context if it will …
## What changes were proposed in this pull request?
App name will be random UUID while
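The fallback behavior under discussion, sketched with a hypothetical helper (the name `resolve_app_name` is illustrative, not Spark's API):

```python
import uuid

def resolve_app_name(user_name=None):
    # Hypothetical fallback: use a random UUID when no app name is supplied.
    return user_name if user_name is not None else str(uuid.uuid4())
```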
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14683
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/63913/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14683
**[Test build #63913 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63913/consoleFull)**
for PR 14683 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14683
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14279
**[Test build #63917 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63917/consoleFull)**
for PR 14279 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14279
**[Test build #63918 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63918/consoleFull)**
for PR 14279 at commit
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/14667#discussion_r75097036
--- Diff: docs/running-on-mesos.md ---
@@ -207,6 +207,16 @@ The scheduler will start executors round-robin on the
offers Mesos
gives it, but there are
GitHub user GraceH opened a pull request:
https://github.com/apache/spark/pull/14683
[SPARK-16968] Add additional options in jdbc when creating a new table
## What changes were proposed in this pull request?
(Please fill in changes proposed in this fix)
In the PR, we
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14279
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/63915/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14279
**[Test build #63915 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63915/consoleFull)**
for PR 14279 at commit
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/14685
cc @davies (we discussed this in PR
https://github.com/apache/spark/pull/14548)
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14279
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14687
**[Test build #63919 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63919/consoleFull)**
for PR 14687 at commit
Github user andreweduffy commented on the issue:
https://github.com/apache/spark/pull/14671
Yeah benchmarking is definitely a great idea, as it is likely Spark will be
better than Parquet at filtering individual records, but I'm still not quite
understanding why this filter is any
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14688
**[Test build #63920 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63920/consoleFull)**
for PR 14688 at commit
GitHub user jagadeesanas2 opened a pull request:
https://github.com/apache/spark/pull/14688
[SPARK-17095] [Documentation] [Latex and Scala doc do not play nicely]
## What changes were proposed in this pull request?
In Latex, it is common to find "}}}" when closing several
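The collision, roughly: Scaladoc delimits inline code examples with `{{{ ... }}}`, so Latex that nests three closing braces produces a literal `}}}` and ends the example early. One workaround is to space the braces apart; a hypothetical sketch of that transformation:

```python
def space_closing_braces(latex):
    # Break up "}}}" so it cannot be read as a Scaladoc code-block terminator.
    return latex.replace("}}}", "} } }")

spaced = space_closing_braces("e^{x_{i_{j}}}")
assert "}}}" not in spaced
```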
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/14683
Hm, this does not look up to date with master. The original changes were
already merged, so I think this would just include your doc changes. Maybe
squash your commits, rebase on master?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14684
**[Test build #63914 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63914/consoleFull)**
for PR 14684 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14682
**[Test build #63911 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63911/consoleFull)**
for PR 14682 at commit
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/14684
This is a duplicate of SPARK-16966, pretty much. I don't think this
behavior here should be changed. See the fix for SPARK-16966 and discussion for
more about the intended behavior.
---
Github user phalodi commented on the issue:
https://github.com/apache/spark/pull/14684
@srowen Yes, you are right, but SPARK-16966 only covers SparkSession; when a
user creates a SparkContext directly, they must still give an app name. I think
for both SparkSession and SparkContext the
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14620
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14620
**[Test build #63912 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63912/consoleFull)**
for PR 14620 at commit
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/14676#discussion_r75110862
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveInlineTables.scala
---
@@ -0,0 +1,109 @@
+/*
+ * Licensed to
Github user andreweduffy commented on the issue:
https://github.com/apache/spark/pull/14671
That is true, but currently all filters are being pushed down to row-by-row
anyway when not using the vectorized reader, so I'm unclear why the IN filter
is special
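Row-by-row evaluation of an IN predicate is just a per-record membership test, which this sketch on plain Python tuples illustrates (not Parquet's or Spark's API):

```python
# An IN filter applied record by record: keep rows whose key is in the set.
in_values = {"a", "c"}
rows = [("a", 1), ("b", 2), ("c", 3)]
kept = [r for r in rows if r[0] in in_values]
assert kept == [("a", 1), ("c", 3)]
```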
---
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/14650
Seems OK to me, to the limits of my understanding, and given the logic of
https://github.com/apache/spark/pull/14650#issuecomment-240057336
---
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/14643#discussion_r75096884
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/classification/ProbabilisticClassifier.scala
---
@@ -201,11 +201,18 @@ abstract class
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14620
**[Test build #63912 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63912/consoleFull)**
for PR 14620 at commit
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/14559
@GraceH yeah you'll need a new PR. You can open it vs the same JIRA since
these are fairly tightly related.
---
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/14676#discussion_r75110610
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveInlineTables.scala
---
@@ -0,0 +1,109 @@
+/*
+ * Licensed to
Github user skonto commented on a diff in the pull request:
https://github.com/apache/spark/pull/14667#discussion_r75110761
--- Diff: docs/running-on-mesos.md ---
@@ -207,6 +207,16 @@ The scheduler will start executors round-robin on the
offers Mesos
gives it, but there are
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/14620
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14688
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/63920/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14688
**[Test build #63920 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63920/consoleFull)**
for PR 14688 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14688
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14682
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14682
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/63911/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14279
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/63917/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14279
**[Test build #63917 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63917/consoleFull)**
for PR 14279 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14685
**[Test build #63916 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63916/consoleFull)**
for PR 14685 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14279
Merged build finished. Test FAILed.
---
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/14685#discussion_r75106052
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/subquery.scala ---
@@ -56,30 +44,29 @@ trait ExecSubqueryExpression extends
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/14620
Merging to master. Thanks!
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14690
**[Test build #63931 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63931/consoleFull)**
for PR 14690 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14690
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/63931/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14690
Merged build finished. Test FAILed.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/14279
Do we need a separate setting for time format?
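The question concerns whether date and timestamp patterns warrant distinct options; what a single format string drives can be sketched in plain Python (the pattern here is illustrative, not a Spark option value):

```python
from datetime import datetime

fmt = "%Y-%m-%dT%H:%M:%S"  # one pattern covering both date and time fields
s = "2016-08-18T09:30:00"
parsed = datetime.strptime(s, fmt)
assert parsed.strftime(fmt) == s  # round-trips under the same pattern
```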
---
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/12753
Hi @devaraj-kavali, are you still interested in updating this PR? I really
think it should use the features I added in SPARK-16671 (especially the
`ConfigReader` code), to avoid yet another way of
GitHub user holdenk opened a pull request:
https://github.com/apache/spark/pull/14691
[SPARK-16407][STREAMING] Allow users to supply custom streamsink provider
## What changes were proposed in this pull request?
This change allows the user to supply a specific instance of a
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/14689
Oh, I missed multi column cases. I'll fix soon.
---
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/14279#discussion_r75195100
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/DateTimeUtils.scala
---
@@ -62,6 +64,11 @@ object DateTimeUtils {
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14691
**[Test build #63934 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63934/consoleFull)**
for PR 14691 at commit
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/14279#discussion_r75196399
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVRelation.scala
---
@@ -204,18 +213,50 @@ private[csv] class
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/14279#discussion_r75196517
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/JSONOptions.scala
---
@@ -53,6 +55,12 @@ private[sql] class JSONOptions(
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/12753
@vanzin, SPARK-3767 was resolved as 'Won't Fix' by @srowen. I was under the
assumption that SPARK-16671 covers this as well.
---
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14613#discussion_r75114001
--- Diff: R/pkg/inst/tests/testthat/test_sparkSQL.R ---
@@ -668,6 +668,15 @@ test_that("collect() returns a data.frame", {
df <-
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14613#discussion_r75115733
--- Diff: R/pkg/R/DataFrame.R ---
@@ -392,7 +392,11 @@ setMethod("coltypes",
}
if (is.null(type)) {
-
Github user steveloughran commented on the issue:
https://github.com/apache/spark/pull/14371
...rebased the patch against master and addressed @vanzin's comments. The
`mkdirs()` change in `HDFSBackedStateStoreProvider` was done after reviewing
code in Hadoop, esp. HDFS and RawLocal. When the
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14685
**[Test build #63916 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63916/consoleFull)**
for PR 14685 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14685
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/63916/
Test PASSed.
---
Github user yucai commented on the issue:
https://github.com/apache/spark/pull/14481
Generated code example, **not for code review yet**.
```
scala> Seq(("a", "10"), ("b", "1"), ("b", "2"), ("c", "5"), ("c", "3")).
| toDF("k",
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/14673#discussion_r75119974
--- Diff: core/src/main/scala/org/apache/spark/ui/SparkUI.scala ---
@@ -141,6 +141,7 @@ private[spark] object SparkUI {
val DEFAULT_POOL_NAME =
Github user yucai commented on the issue:
https://github.com/apache/spark/pull/14481
@chenghao-intel Hao, kindly take a look.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14688
**[Test build #63922 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63922/consoleFull)**
for PR 14688 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14688
**[Test build #63922 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63922/consoleFull)**
for PR 14688 at commit
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14558#discussion_r75123349
--- Diff: R/pkg/R/mllib.R ---
@@ -602,14 +599,14 @@ setMethod("spark.survreg", signature(data =
"SparkDataFrame", formula = "formula
# Returns a
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/14673#discussion_r75126029
--- Diff:
core/src/main/scala/org/apache/spark/ui/jobs/JobProgressListener.scala ---
@@ -93,6 +93,8 @@ class JobProgressListener(conf: SparkConf) extends
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/14371#discussion_r75114636
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/HDFSBackedStateStoreProvider.scala
---
@@ -278,14 +278,15 @@
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14613#discussion_r75115897
--- Diff: R/pkg/R/DataFrame.R ---
@@ -392,7 +392,11 @@ setMethod("coltypes",
}
if (is.null(type)) {
-
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14229#discussion_r75120758
--- Diff: R/pkg/R/mllib.R ---
@@ -299,6 +306,94 @@ setMethod("summary", signature(object =
"NaiveBayesModel"),
return(list(apriori =
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/14673#discussion_r75121902
--- Diff:
core/src/main/scala/org/apache/spark/ui/jobs/JobProgressListener.scala ---
@@ -93,6 +93,8 @@ class JobProgressListener(conf: SparkConf) extends
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14392#discussion_r75122731
--- Diff: R/pkg/R/mllib.R ---
@@ -632,3 +660,110 @@ setMethod("predict", signature(object =
"AFTSurvivalRegressionModel"),
Github user yanboliang commented on a diff in the pull request:
https://github.com/apache/spark/pull/14392#discussion_r75123679
--- Diff: R/pkg/R/mllib.R ---
@@ -632,3 +660,110 @@ setMethod("predict", signature(object =
"AFTSurvivalRegressionModel"),
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r75066449
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -144,16 +161,147 @@ private[spark] class
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r75066432
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -144,16 +161,147 @@ private[spark] class
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/14678
cc @rxin, could you check if this makes sense, please?
---
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/14678#discussion_r75066526
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -322,11 +322,6 @@ object SQLConf {
.intConf
Github user kiszk commented on the issue:
https://github.com/apache/spark/pull/13704
ping @liancheng
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14676
Merged build finished. Test PASSed.
---
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/14680#discussion_r75073488
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/text/TextFileFormat.scala
---
@@ -40,6 +40,8 @@ class TextFileFormat
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14678
**[Test build #63899 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63899/consoleFull)**
for PR 14678 at commit
Github user GraceH commented on the issue:
https://github.com/apache/spark/pull/14559
Hi @srowen @rxin, sorry for the late response. I have added the document part.
https://github.com/GraceH/spark/commit/8360c2911b70aa628f8edba593e3764d3b07ca55
Shall I raise a new PR?
---
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/14667#discussion_r75116641
--- Diff: docs/running-on-mesos.md ---
@@ -207,6 +207,16 @@ The scheduler will start executors round-robin on the
offers Mesos
gives it, but there are
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14371
**[Test build #63921 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63921/consoleFull)**
for PR 14371 at commit