Github user srowen commented on the issue:
https://github.com/apache/spark/pull/15659
You might call some attention to this on dev@, along with a few other big
Python changes, for this reason and in order to see if anyone can review them.
If so, good; if not, then it would highlight
Github user holdenk commented on the issue:
https://github.com/apache/spark/pull/15659
Also - I know we are talking about starting to cut the 2.1 branch soon. I
think it would be really good (if it's possible) to get this PR into 2.1 so we
can get feedback on the PySpark pip artifacts
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15686
Thank you for merging, @srowen .
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/15659
I would have to plead too much ignorance to be able to review or merge
this. Python is not my area. I agree that this is a problem if there's
basically no active Python maintainers. Time to perhaps
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/15687
Jenkins test this please
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/15715
What about keeping the indent of the div tags but un-indenting only the
code? It might look weird in markdown, but does it look nice in the rendered
output?
If it's any more trouble than
Github user mrydzy commented on the issue:
https://github.com/apache/spark/pull/15687
Ok, LocalPi updated and commits squashed. Hope it's fine now.
Github user holdenk commented on the issue:
https://github.com/apache/spark/pull/15659
Thanks @srowen - I figured it was a long shot asking you to review this
(but build related so I figured I'd run it by you). +1 to escalating the Python
maintainer situation.
Github user lw-lin commented on the issue:
https://github.com/apache/spark/pull/15715
![snip20161101_7](https://cloud.githubusercontent.com/assets/15843379/19896158/469d72b0-a08e-11e6-972f-5706ab4aa4c4.png)
Please see the screenshot above -- this is `flume doc` -- I got this
Github user holdenk commented on the issue:
https://github.com/apache/spark/pull/15659
That's a good question @rgbkrk - I think the closest is @davies, but he has
been pretty busy lately. I think it's pretty clear that Spark is understaffed on
Python maintainers - but it's sort of a
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15618
**[Test build #67903 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/67903/consoleFull)**
for PR 15618 at commit
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15608
Yup, it seems not even `varargsToStrEnv`.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15716
**[Test build #67901 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/67901/consoleFull)**
for PR 15716 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15716
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/67901/
Test FAILed.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15716
Merged build finished. Test FAILed.
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/15687
There's one more in `LocalPi.scala`
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15716
**[Test build #67901 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/67901/consoleFull)**
for PR 15716 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15354
**[Test build #67902 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/67902/consoleFull)**
for PR 15354 at commit
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15354
retest this please
GitHub user WeichenXu123 opened a pull request:
https://github.com/apache/spark/pull/15716
[SPARK-18201] add toDense and toSparse into Matrix trait, like Vector design
## What changes were proposed in this pull request?
add toDense and toSparse into Matrix trait, like
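The conversions this PR proposes can be sketched outside of Scala. Below is a
hedged Python sketch (the class and method names are hypothetical stand-ins for
the `toDense`/`toSparse` members being added to Spark's `Matrix` trait), using
Spark's column-major storage convention:

```python
class DenseMatrix:
    def __init__(self, rows, cols, values):
        # values are stored column-major, matching Spark's convention
        self.rows, self.cols, self.values = rows, cols, values

    def to_sparse(self):
        # Keep only the non-zero entries as (row, col, value) triples.
        triples = [(i, j, v)
                   for j in range(self.cols)
                   for i in range(self.rows)
                   if (v := self.values[j * self.rows + i]) != 0]
        return SparseMatrix(self.rows, self.cols, triples)


class SparseMatrix:
    def __init__(self, rows, cols, triples):
        self.rows, self.cols, self.triples = rows, cols, triples

    def to_dense(self):
        # Materialize a full column-major value array, zero-filled.
        values = [0] * (self.rows * self.cols)
        for i, j, v in self.triples:
            values[j * self.rows + i] = v
        return DenseMatrix(self.rows, self.cols, values)
```

The round trip `to_sparse().to_dense()` should reproduce the original values,
which mirrors how the Vector design's `toDense`/`toSparse` pair behaves.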
Github user mrydzy commented on the issue:
https://github.com/apache/spark/pull/15687
Thank you Sean! I've updated Scala, Java and Python examples and changed
the commit name as you've requested. I can't seem to find the fourth example
though.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15715
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/67899/
Test PASSed.
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/15715
What about un-indenting those blocks? I don't see why they have to be
indented in the first place. They're HTML tags.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15715
Merged build finished. Test PASSed.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15708
Hi @ericl, I just happened to come here to track down the flaky tests. I
just wonder if it'd make sense to just ignore this test maybe?
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15715
**[Test build #67899 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/67899/consoleFull)**
for PR 15715 at commit
Github user LeightonWong commented on the issue:
https://github.com/apache/spark/pull/15643
@mgummelt @srowen 1.6 lacks --conf options; you could add some code to
support --conf, it is useful. So if 1.6 is still maintained I can submit a new
PR, or you can fix it yourself, it is
Github user lw-lin commented on the issue:
https://github.com/apache/spark/pull/15715
@srowen thanks.
I'm afraid the streaming `flume integration doc`, `kinesis integration
doc`, as well as the `kafka08 integration doc` also need code highlights, but this
`{% highlight %}`
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15354
Merged build finished. Test FAILed.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15354
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/67893/
Test FAILed.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15354
**[Test build #67893 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/67893/consoleFull)**
for PR 15354 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15677
**[Test build #67900 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/67900/consoleFull)**
for PR 15677 at commit
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/15713
It leads to a bigger memory footprint because of the extra elements that
may be allocated but unused in the array? And the performance win is probably
avoiding those branches? I could believe it.
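For context, the structure under discussion keeps its first elements in fixed
fields and only spills to an array once it grows. A hedged Python sketch of
that idea (two inline slots, mirroring Spark's `CompactBuffer` design; details
simplified, this is not the PR's code):

```python
class CompactBuffer:
    # The first two elements live in fixed slots so tiny buffers (the
    # common case in groupBy-style operations) never allocate a spill
    # array; the cost is a branch on every append and access.
    def __init__(self):
        self._e0 = None
        self._e1 = None
        self._size = 0
        self._rest = []  # only populated once size exceeds 2

    def append(self, value):
        if self._size == 0:
            self._e0 = value
        elif self._size == 1:
            self._e1 = value
        else:
            self._rest.append(value)
        self._size += 1

    def __getitem__(self, i):
        if i >= self._size:
            raise IndexError(i)
        if i == 0:
            return self._e0
        if i == 1:
            return self._e1
        return self._rest[i - 2]

    def __len__(self):
        return self._size
```

The trade-off srowen describes is visible here: replacing the inline slots with
a pre-sized array removes the `if` branches but allocates capacity that one- and
two-element buffers never use.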
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/15715
That is nicer. Are there any other code snippets across the code base that
need this treatment? Seems like this is always the right thing to do.
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/15627#discussion_r85948721
--- Diff:
yarn/src/test/scala/org/apache/spark/deploy/yarn/ClientSuite.scala ---
@@ -282,6 +282,37 @@ class ClientSuite extends SparkFunSuite with
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/15681
@lw-lin are you persuaded or still feel it's fairly valuable for users? I
guess I'm neutral too.
Github user lw-lin commented on the issue:
https://github.com/apache/spark/pull/15715
@koeninger @srowen it'd be great if you could take a look at this too :)
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/15538
Ah right. Let me redirect 18140 to 13127 because all of these end up being
resolved by "upgrade to 1.9"
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/15627#discussion_r85948423
--- Diff:
yarn/src/test/scala/org/apache/spark/deploy/yarn/ClientSuite.scala ---
@@ -282,6 +282,37 @@ class ClientSuite extends SparkFunSuite with
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15715
**[Test build #67899 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/67899/consoleFull)**
for PR 15715 at commit
Github user lw-lin commented on the issue:
https://github.com/apache/spark/pull/15715
Fixing the leading spaces is needed, because without the fix the code
snippet would contain leading spaces as well (see the pic below), which is quite
inconsistent with the other programming guides.
Github user mallman commented on the issue:
https://github.com/apache/spark/pull/15538
I found two such tickets. How should we organize this in Jira?
GitHub user lw-lin opened a pull request:
https://github.com/apache/spark/pull/15715
[SPARK-18198][Doc][Streaming] Highlight code snippets
## What changes were proposed in this pull request?
We should use `{% highlight lang %}` ... `{% endhighlight %}` to highlight code
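A minimal example of the Jekyll tags in question (the Scala snippet inside is
illustrative only, not taken from the docs being changed):

```liquid
{% highlight scala %}
val spark = SparkSession.builder.appName("example").getOrCreate()
{% endhighlight %}
```

Jekyll's syntax highlighter then renders the enclosed code with language-aware
coloring, instead of the plain preformatted text the un-tagged blocks produce.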
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15714
**[Test build #67897 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/67897/consoleFull)**
for PR 15714 at commit
Github user koeninger commented on the issue:
https://github.com/apache/spark/pull/15681
I'm agnostic on the value of adding the overload, if @lw-lin thinks it's
more convenient for users. There are considerably fewer overloads as it stands
than the old 0.8 version of KafkaUtils, so
Github user holdenk commented on the issue:
https://github.com/apache/spark/pull/8318
For those following along I've made a pip installable PR over at
https://github.com/apache/spark/pull/15659 and I'd appreciate review from a
committer (cc @mateiz / @davies ). That version also
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15713
**[Test build #67898 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/67898/consoleFull)**
for PR 15713 at commit
GitHub user a-roberts opened a pull request:
https://github.com/apache/spark/pull/15714
[SPARK-18197] [CORE] Optimise AppendOnlyMap implementation
## What changes were proposed in this pull request?
More details on the JIRA, slight performance increase here by performing
the
GitHub user a-roberts opened a pull request:
https://github.com/apache/spark/pull/15713
[SPARK-18196] [CORE] Optimise CompactBuffer implementation
## What changes were proposed in this pull request?
See the JIRA for details - summary is slightly increased footprint in the
class
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15677
**[Test build #67891 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/67891/consoleFull)**
for PR 15677 at commit
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/15681
I see, so it's not an 'oversight'. If you're saying you prefer a Java Map
in this API, is there much value in an overload for a Scala Map? The conversion
is just one method call.
Github user koeninger commented on the issue:
https://github.com/apache/spark/pull/15681
I don't think there's a reason to deprecate it. ju.Map is the lowest
common denominator for kafka params, it's used by the underlying consumer,
and it's what the ConsumerStrategy interface
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/15681
We would want to change the code so that nothing calls the deprecated
method including tests. I think you can let the deprecated one call the new
one? When to remove it is a question for the future,
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15677
Merged build finished. Test FAILed.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15677
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/67891/
Test FAILed.
Github user rgbkrk commented on the issue:
https://github.com/apache/spark/pull/15659
Does Spark have maintainers that work primarily in the Python ecosystem?
Github user lw-lin commented on the issue:
https://github.com/apache/spark/pull/15681
Deprecating the existing one would mean we still need to introduce some
`createRDDInternal` and let the deprecated one call it; then we can just
remove the deprecated one some time in the future?
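The delegation pattern discussed above - the deprecated entry point forwarding
to its replacement so the logic lives in one place - can be sketched in Python
(function names here are hypothetical stand-ins; the real API under discussion
is `KafkaUtils.createRDD`):

```python
import warnings


def create_rdd_new(kafka_params: dict) -> list:
    # New overload: takes the parameters directly (stand-in body that
    # just returns the sorted parameter names).
    return sorted(kafka_params)


def create_rdd(kafka_params: dict) -> list:
    # Deprecated overload: warn the caller, then delegate to the new
    # one, so removing it later is a one-line deletion.
    warnings.warn("use create_rdd_new instead", DeprecationWarning,
                  stacklevel=2)
    return create_rdd_new(kafka_params)
```

As srowen suggests, nothing in the code base (including tests) should call the
deprecated name once the new one exists; only external callers go through the
warning path.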
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/15542
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/15542
Merging to master. Thanks for the review.
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/1
@eyalfa Can you also rephrase the section `What changes were proposed in
this pull request?` and move the newly added description to this section?
Github user zjffdu commented on the issue:
https://github.com/apache/spark/pull/15669
That's correct, this PR will also fix the yarn-client case. PR title is
updated.
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/15681
Should we then deprecate the existing method this is 'replacing', if it's
not going to be removed? the message I get is that the existing one shouldn't
really be used. Otherwise it's hard to
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15657
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/67890/
Test FAILed.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15657
Merged build finished. Test FAILed.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15657
**[Test build #67890 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/67890/consoleFull)**
for PR 15657 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15659
**[Test build #67896 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/67896/consoleFull)**
for PR 15659 at commit
Github user holdenk commented on the issue:
https://github.com/apache/spark/pull/15659
re-ping @mateiz & @davies and ping @rxin who was involved in review of the
previous PR. Also maybe @felixcheung might have some views being another person
that works on non-JVM languages.
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15618#discussion_r85934066
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/scheduler/ReceiverTracker.scala
---
@@ -154,6 +147,17 @@ class ReceiverTracker(ssc:
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15659#discussion_r85935215
--- Diff: python/setup.py ---
@@ -0,0 +1,179 @@
+#!/usr/bin/env python
+
+#
+# Licensed to the Apache Software Foundation (ASF) under one
Github user wzhfy commented on a diff in the pull request:
https://github.com/apache/spark/pull/15637#discussion_r85934978
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/MapAggregate.scala
---
@@ -0,0 +1,332 @@
+/*
+ * Licensed
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/11105
**[Test build #67895 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/67895/consoleFull)**
for PR 11105 at commit
Github user wzhfy commented on the issue:
https://github.com/apache/spark/pull/15641
Got it, thanks! I think it also has the problem, let me open a PR for it.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15659
**[Test build #67894 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/67894/consoleFull)**
for PR 15659 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15354
**[Test build #67893 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/67893/consoleFull)**
for PR 15354 at commit
Github user wzhfy commented on a diff in the pull request:
https://github.com/apache/spark/pull/15637#discussion_r85933282
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/MapAggregate.scala
---
@@ -0,0 +1,324 @@
+/*
+ * Licensed
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/15668
It looks to me that after this PR we still have literals in expand; can
you call `df.explain(true)` to double-check the query plan?
And how about this?
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/15687
There are actually 4 instances of the Pi example. I'll merge it if you
update them all, just for reasons of clarity, sure. If you would update the
title to something like `[MINOR] Use <= for clarity
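The `<=` in question is the hit test of the Monte Carlo Pi estimator. A minimal
Python sketch of the same idea (an assumed shape for illustration, not the
actual `LocalPi.scala` code; this version samples the unit square's first
quadrant):

```python
import random


def estimate_pi(n: int, seed: int = 42) -> float:
    # Sample points uniformly in the unit square and count those that
    # fall inside the quarter circle; `<=` (rather than `<`) makes it
    # explicit that boundary points count as hits, which is the clarity
    # fix being merged above.
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1:
            hits += 1
    return 4.0 * hits / n
```

Since the circle boundary has measure zero, `<` and `<=` give statistically
identical estimates; the change is purely about making the intent readable.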
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/15659#discussion_r85933041
--- Diff: dev/create-release/release-build.sh ---
@@ -162,14 +162,35 @@ if [[ "$1" == "package" ]]; then
export ZINC_PORT=$ZINC_PORT
echo
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15677
(PR description is updated too.) Thank you @srowen.
Github user lw-lin commented on the issue:
https://github.com/apache/spark/pull/15681
Thank you @koeninger !
Please let me cc @srowen who's been around to also take a look~
Github user wzhfy commented on a diff in the pull request:
https://github.com/apache/spark/pull/15637#discussion_r85931101
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/MapAggregate.scala
---
@@ -0,0 +1,332 @@
+/*
+ * Licensed
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15677
**[Test build #67892 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/67892/consoleFull)**
for PR 15677 at commit
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/15643
Merged to master/2.0. It didn't merge cleanly vs 1.6, not in a way I could
resolve with confidence.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15677
Oh, let me please push another one.
Github user koeninger commented on the issue:
https://github.com/apache/spark/pull/15681
LGTM, thanks.
If you want to open a separate PR to clean up the private doc issues you
noticed, go for it; it shouldn't need another JIRA, IMHO, if it isn't changing code
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/15643
Github user wzhfy commented on a diff in the pull request:
https://github.com/apache/spark/pull/15637#discussion_r85930167
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/MapAggregate.scala
---
@@ -0,0 +1,332 @@
+/*
+ * Licensed
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15672
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/67888/
Test PASSed.
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/15641
Yeah it was in
`sql/core/src/main/scala/org/apache/spark/sql/execution/stat/StatFunctions.scala`
before in 2.0. I'm not sure if it has the same problem or not.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15643
**[Test build #3389 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3389/consoleFull)**
for PR 15643 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15672
Merged build finished. Test PASSed.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15672
**[Test build #67888 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/67888/consoleFull)**
for PR 15672 at commit
Github user eyalfa commented on the issue:
https://github.com/apache/spark/pull/1
@viirya , better?
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/15677
OK, if this is basically the same change we've been reviewing minus the
'arguments' sections, then LGTM. I'll skim once more.
Github user wzhfy commented on the issue:
https://github.com/apache/spark/pull/15641
@srowen Seems branch-2.0 doesn't have related files,
`ApproximatePercentile.scala` is not merged in 2.0?
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/1
It is better to update the PR description; it is not clear at first glance
what problem this PR proposes to fix.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15677
**[Test build #67891 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/67891/consoleFull)**
for PR 15677 at commit
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/15669
I think this issue also exists in yarn-client mode: files will be added
twice (once with Spark's file server, once with the distributed cache), because
both the file server and yarn#client could find
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/15654
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/15641
It has a merge conflict in 2.0, but if you'd like to get it into 2.0 and
could open a similar PR for testing, I can merge that.