Github user viirya commented on the issue:
https://github.com/apache/spark/pull/13439
So would it be more practical to benchmark a case in which some
constant and some non-constant column vectors are used together, and compare it
with the original case in which all columns
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/13439
I see. My question is: say, for example, we create two column vectors, one
constant and one not. Because we will not reuse the column vectors,
their constant flag is fixed and never changes. As
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/13439
What I meant is that if, in one process, some invocations of the
function hit the true branch and other invocations hit the
false branch, the
Github user squito closed the pull request at:
https://github.com/apache/spark/pull/13548
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13548
**[Test build #60150 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/60150/consoleFull)**
for PR 13548 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13548
**[Test build #3069 has
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3069/consoleFull)**
for PR 13548 at commit
GitHub user squito reopened a pull request:
https://github.com/apache/spark/pull/13548
[DO NOT MERGE] lots of blacklist testing
making jenkins run the scheduler tests a lot
You can merge this pull request into a Git repository by running:
$ git pull
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/12836
Thank you for the quick responses @sun-rui and @shivaram .
Here is how the `dataframe.queryExecution.toString` printout starts:
== Parsed Logical Plan ==
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13413
**[Test build #60149 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/60149/consoleFull)**
for PR 13413 at commit
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/13413
@maropu Thanks for the review, addressed all the comments
Github user techaddict commented on a diff in the pull request:
https://github.com/apache/spark/pull/13413#discussion_r66192955
--- Diff: python/pyspark/sql/tests.py ---
@@ -1481,17 +1481,7 @@ def test_list_functions(self):
spark.sql("CREATE DATABASE some_db")
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/13439
Besides, I wrote this test following the other tests in
`ColumnarBatchBenchmark` that benchmark on-heap and off-heap column vector access.
I thought that might be enough. If not, what else is needed
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/13439
Hmm, but once the flag is set, I think it will not be changed?
Github user koertkuipers commented on the issue:
https://github.com/apache/spark/pull/13526
Could we "rewind"/undo the append for the key and change it to a map that
inserts new values and a key? So remove one append and replace it with another
operation?
Github user ShreyasFadnavis closed the pull request at:
https://github.com/apache/spark/pull/13547
Github user kamalcoursera commented on the issue:
https://github.com/apache/spark/pull/10706
Hi Davies,
Could you please shed more light on the status of correlated but non-scalar
subqueries in the Spark 2.0 release? I would appreciate it if you could also
summarize any other restrictions.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/13439
I am not sure you are really testing this correctly -- your benchmark is
most likely just testing how well the CPU does branch prediction when the
flag is always true or always false.
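The branch-prediction concern above can be illustrated with a small, self-contained sketch (all names here are illustrative, not Spark code): when the flag is the same for every invocation, the CPU predicts the branch almost perfectly, so a benchmark that only ever takes one side of the branch understates the real cost of the check.

```java
import java.util.Random;

// Illustrative sketch of the benchmarking pitfall under discussion: if the
// constant flag is identical for every call, the branch predictor makes the
// conditional nearly free; mixing true/false invocations is what a fair
// benchmark should also cover. Names are hypothetical, not Spark internals.
public class BranchBench {
    public static int readWithFlag(boolean constant, int constantValue, int[] data, int row) {
        // The conditional access pattern whose cost is being debated.
        return constant ? constantValue : data[row];
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        int[] data = new int[n];
        Random rnd = new Random(42);
        for (int i = 0; i < n; i++) data[i] = rnd.nextInt(100);

        // Case 1: flag always false -- the branch is perfectly predictable.
        long sum1 = 0;
        for (int i = 0; i < n; i++) sum1 += readWithFlag(false, 7, data, i);

        // Case 2: flag varies per call -- the predictor can no longer hide
        // the cost of the extra check.
        long sum2 = 0;
        for (int i = 0; i < n; i++) sum2 += readWithFlag((data[i] & 1) == 0, 7, data, i);

        System.out.println(sum1 + " " + sum2);
    }
}
```

A production-grade measurement would use a harness such as JMH rather than a hand-rolled loop, but the sketch shows why a single-sided flag makes the check look free.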
Github user koertkuipers commented on the issue:
https://github.com/apache/spark/pull/13526
The tricky part with that is that `(ds: Dataset[(K, V)]).groupBy(_._1).mapValues(_._2)` should return a `KeyValueGroupedDataset[K, V]`.
On Tue, Jun 7, 2016 at 8:22 PM, Wenchen Fan
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13549
Merged build finished. Test PASSed.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13549
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/60148/
Test PASSed.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13549
**[Test build #60148 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/60148/consoleFull)**
for PR 13549 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13552
Can one of the admins verify this patch?
GitHub user peterableda opened a pull request:
https://github.com/apache/spark/pull/13552
[SPARK-15813] Use past tense for the cancel container request message
## What changes were proposed in this pull request?
Use past tense for the cancel container request message as it is
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13543
Merged build finished. Test PASSed.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13543
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/60146/
Test PASSed.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13543
**[Test build #60146 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/60146/consoleFull)**
for PR 13543 at commit
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/13550
@marymwu this has been fixed in
https://github.com/apache/spark/commit/09b3c56c91831b3e8d909521b8f3ffbce4eb0395.
Could you close this PR?
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13545
What do you think about `dropDuplicates`?
1. `ds.select("_1", "_2", "_3").dropDuplicates(Seq("_1", "_2")).orderBy("_1", "_2").show()`
2. ds.select("_1", "_2", "_3").dropDuplicates("_1",
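As a rough illustration of the semantics being compared (plain Java, not Spark code): `dropDuplicates` on a column subset keeps the first row seen for each distinct key. A minimal sketch, with hypothetical names:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch (not Spark internals) of deduplicating rows on a
// subset of columns: the first row observed for each distinct (_1, _2)
// pair wins, mirroring dropDuplicates(Seq("_1", "_2")).
public class DropDuplicatesSketch {
    public static List<int[]> dropDuplicates(List<int[]> rows) {
        Set<List<Integer>> seen = new HashSet<>();
        List<int[]> out = new ArrayList<>();
        for (int[] r : rows) {
            List<Integer> key = Arrays.asList(r[0], r[1]); // the "_1", "_2" columns
            if (seen.add(key)) out.add(r);                 // first occurrence wins
        }
        return out;
    }
}
```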
Github user AllenShi closed the pull request at:
https://github.com/apache/spark/pull/13551
GitHub user AllenShi opened a pull request:
https://github.com/apache/spark/pull/13551
merge original repository
## What changes were proposed in this pull request?
(Please fill in changes proposed in this fix)
## How was this patch tested?
(Please
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13548
Merged build finished. Test FAILed.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13548
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/60138/
Test FAILed.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13548
**[Test build #60138 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/60138/consoleFull)**
for PR 13548 at commit
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/13550
It would be nicer if this PR follows
https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark and has
a test.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13550
Can one of the admins verify this patch?
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/13371
cc @rxin Can you also take a look at this? It has been sitting for a while too.
Thanks!
GitHub user marymwu opened a pull request:
https://github.com/apache/spark/pull/13550
SPARK-15755
JIRA Issue: https://issues.apache.org/jira/browse/SPARK-15755
java.lang.NullPointerException when running Spark 2.0 with setting
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/13439
@rxin Hmm, I just think that if we can improve it by adding a conditional
check, it might be worth doing.
Regarding the performance hit, here is the benchmark for on-heap and off-heap column
vectors
Github user zhonghaihua commented on the issue:
https://github.com/apache/spark/pull/12258
@vanzin my JIRA username is `iward`. Thanks a lot.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13549
**[Test build #60148 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/60148/consoleFull)**
for PR 13549 at commit
Github user tdas commented on the issue:
https://github.com/apache/spark/pull/13549
@marmbrus
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/13549#discussion_r66182722
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/UnsupportedOperationChecker.scala
---
@@ -123,27 +159,6 @@ object
GitHub user tdas opened a pull request:
https://github.com/apache/spark/pull/13549
Added support for sorting after streaming aggregation with complete mode
## What changes were proposed in this pull request?
When the output mode is complete, then the output of a streaming
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13544
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/60147/
Test PASSed.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13544
**[Test build #60147 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/60147/consoleFull)**
for PR 13544 at commit
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/13394#discussion_r66182476
--- Diff: R/pkg/R/mllib.R ---
@@ -197,11 +197,10 @@ print.summary.GeneralizedLinearRegressionModel <-
function(x, ...) {
invisible(x)
}
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13544
Merged build finished. Test PASSed.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13439
Merged build finished. Test PASSed.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13439
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/60141/
Test PASSed.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13439
**[Test build #60141 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/60141/consoleFull)**
for PR 13439 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13540
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/60145/
Test PASSed.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/13300
@pjfanning we are now focusing on bug fixes and stability fixes rather than
adding new features.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13540
Merged build finished. Test PASSed.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13540
**[Test build #60145 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/60145/consoleFull)**
for PR 13540 at commit
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/13545
For API design it would be better to be very conservative, because we
cannot remove APIs. There is always value in adding something, but there is
also a cost to maintenance and user experience (too
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13542
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/60144/
Test PASSed.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/13439
@viirya this is still a pretty major change for unclear benefits. There
might be other more important things that need more eyes on...
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13542
**[Test build #60144 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/60144/consoleFull)**
for PR 13542 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13542
Merged build finished. Test PASSed.
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/13545#discussion_r66181659
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -2262,6 +2275,19 @@ class Dataset[T] private[sql](
def distinct(): Dataset[T]
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/13439
Wouldn't this hurt performance even more due to the extra branch?
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13544
**[Test build #60147 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/60147/consoleFull)**
for PR 13544 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13439
Merged build finished. Test PASSed.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13439
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/60140/
Test PASSed.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13544
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/60143/
Test PASSed.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13544
Merged build finished. Test PASSed.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13439
**[Test build #60140 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/60140/consoleFull)**
for PR 13439 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13544
**[Test build #60143 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/60143/consoleFull)**
for PR 13544 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13543
**[Test build #60146 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/60146/consoleFull)**
for PR 13543 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13540
**[Test build #60145 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/60145/consoleFull)**
for PR 13540 at commit
Github user zjffdu commented on the issue:
https://github.com/apache/spark/pull/13540
Thanks @BryanCutler @MechCoder @MLnick for the review. I just updated the PR
to make it a property. Regarding the PySpark docs, I think there's an umbrella
JIRA to bring PySpark to parity with Scala MLlib
Github user WeichenXu123 commented on the issue:
https://github.com/apache/spark/pull/13544
@rxin
A small problem:
in `HiveContext` there is a method `refreshTable` for refreshing the metadata
of a Hive table.
Now, using the new SparkSession API with Hive support, the method is
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13544
**[Test build #60143 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/60143/consoleFull)**
for PR 13544 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13542
**[Test build #60144 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/60144/consoleFull)**
for PR 13542 at commit
Github user MLnick commented on a diff in the pull request:
https://github.com/apache/spark/pull/12938#discussion_r66177599
--- Diff: python/pyspark/ml/classification.py ---
@@ -183,7 +191,7 @@ def getThresholds(self):
If :py:attr:`thresholds` is set, return its value.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13189
Seems it is fine not to have metrics when we use `hiveResultString`.
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/13394#discussion_r66177097
--- Diff: R/pkg/R/mllib.R ---
@@ -197,11 +197,10 @@ print.summary.GeneralizedLinearRegressionModel <-
function(x, ...) {
invisible(x)
}
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/12938
**[Test build #60139 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/60139/consoleFull)**
for PR 12938 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/12938
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/60139/
Test FAILed.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/12938
Merged build finished. Test FAILed.
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/13526
A possible approach may be to just keep the function given by `mapValues` and
apply it before calling the function given by `mapGroups`. By doing this, we at
least won't make the performance worse,
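The approach described above can be sketched outside Spark (all names hypothetical, not the actual Dataset internals): hold on to the `mapValues` function and apply it lazily, per element, only when the `mapGroups` function consumes the group.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.function.BiFunction;
import java.util.function.Function;

// Hedged sketch of deferring the mapValues function until mapGroups runs,
// so no intermediate collection is materialized. Names are illustrative.
public class MapValuesSketch<K, V> {
    final Map<K, List<V>> groups;
    public MapValuesSketch(Map<K, List<V>> groups) { this.groups = groups; }

    public <W, R> List<R> mapValuesThenMapGroups(Function<V, W> valueFunc,
                                                 BiFunction<K, Iterator<W>, R> groupFunc) {
        List<R> out = new ArrayList<>();
        for (Map.Entry<K, List<V>> e : groups.entrySet()) {
            // Apply the stored mapValues function on the fly, per element,
            // immediately before the group function sees each value.
            Iterator<W> it = e.getValue().stream().map(valueFunc).iterator();
            out.add(groupFunc.apply(e.getKey(), it));
        }
        return out;
    }
}
```

The design point is that composing the two functions keeps a single pass over the data, so at worst performance matches applying `mapGroups` alone.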
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/12824
@tgravescs the problem is this code in Client.scala:
`sparkConf.set(TOKEN_RENEWAL_INTERVAL, renewalInterval)`
That will write the value to the config with the `ms` suffix. I think
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/13439
@rxin I've updated this to a simpler approach that doesn't introduce new
classes. The main change is to check whether the current vector is constant or not
and do the suitable data access. Please take a
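The conditional-access idea can be sketched as follows (assumed, simplified names, not the actual `OnHeapColumnVector` code): a vector flagged constant serves every read from a single value instead of its backing array.

```java
// Minimal hypothetical sketch of a constant-aware column vector: the
// constructor fixes the flag for the vector's lifetime (matching the point
// in this thread that the flag is never changed after creation), and every
// read branches on it.
public class ConstantAwareVector {
    private final boolean isConstant;
    private final int constantValue;
    private final int[] data;

    public ConstantAwareVector(int constantValue) {  // constant vector
        this.isConstant = true;
        this.constantValue = constantValue;
        this.data = null;
    }

    public ConstantAwareVector(int[] data) {         // ordinary vector
        this.isConstant = false;
        this.constantValue = 0;
        this.data = data;
    }

    public int getInt(int rowId) {
        // The extra branch whose cost is debated earlier in the thread.
        return isConstant ? constantValue : data[rowId];
    }
}
```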
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13543
**[Test build #60142 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/60142/consoleFull)**
for PR 13543 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13543
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/60142/
Test FAILed.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13543
Merged build finished. Test FAILed.
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/13534
LGTM, merging to master and 2.0, thanks!
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13543
**[Test build #60142 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/60142/consoleFull)**
for PR 13543 at commit
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/13189
`QueryExecution.hiveResultString` will call `SparkPlan.executeCollect`
without setting an execution id. This method is only used in tests; should we
just stop reporting metrics for this case, or
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/13439#discussion_r66174085
--- Diff:
sql/core/src/main/java/org/apache/spark/sql/execution/vectorized/OnHeapColumnVector.java
---
@@ -70,26 +71,106 @@ public long nullsNativeAddress()
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/13534
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13439
**[Test build #60141 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/60141/consoleFull)**
for PR 13439 at commit
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/13439
The latest benchmark is run individually for each type of column vector. As
stated in `ColumnarBatchBenchmark`, it is hard to reason about the JIT. If we
put these 4 cases together in one benchmark run,
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13439
**[Test build #60140 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/60140/consoleFull)**
for PR 13439 at commit
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/13439
Benchmarked again on the new change:
Environment:
Java HotSpot(TM) 64-Bit Server VM 1.8.0_71-b15 on Linux
3.19.0-25-generic
Intel(R) Core(TM) i7-5557U CPU @ 3.10GHz
Github user zjffdu commented on the issue:
https://github.com/apache/spark/pull/13495
cc @yanboliang
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/13530
@dhruve could you close the PR? The bot doesn't do it automatically for
backports. thx
Github user holdenk commented on the issue:
https://github.com/apache/spark/pull/9207
So I guess I'm wondering what our plans for PMML look like - I'm happy to
update this or go in the direction @MLnick suggested if that's what we want.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/12938
**[Test build #60139 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/60139/consoleFull)**
for PR 12938 at commit
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/13335