Github user mridulm commented on the pull request:
https://github.com/apache/spark/pull/5084#issuecomment-87048930
Closing based on internal discussions
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well.
Github user mridulm closed the pull request at:
https://github.com/apache/spark/pull/5084
---
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5084#issuecomment-86835246
Yeah I agree with Sean and what pretty much everyone else said.
My feeling is that we don't want typical Spark users to be relying on
unstable APIs, else it cre
---
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/5084#issuecomment-86254625
I don't think the culture of deciding on changes is different here than in
any other Apache-like project. I don't think it helps to declare this a
"required" change, and I
---
Github user mridulm commented on the pull request:
https://github.com/apache/spark/pull/5084#issuecomment-86239409
@srowen
This is not a "wish" - having led (and leading) multiple efforts which
have made nontrivial use of Spark, I do think this is a required change: the
abil
---
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/5084#issuecomment-86228605
I think a lot of that is already clear, and I read this as several vetoes,
which is why it was closed. Sure, another round of discussion, but unless that
changes everyone'
---
Github user mridulm commented on the pull request:
https://github.com/apache/spark/pull/5084#issuecomment-86221601
Looks like I was not getting notifications for this PR - so could not
participate in the discussion; sorry for the delay!
There are a few issues to be considered
---
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/5084#issuecomment-86210096
I think this is a WontFix then and this PR can be closed.
---
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/5084#issuecomment-85610784
I'm also in favor of not exposing this and having users copy these classes
themselves, which should be pretty easy since these files are more-or-less
self-contained.
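[Editor's note: the thread's suggestion is to copy the small, self-contained collection helpers rather than depend on Spark internals. The diff below touches `org.apache.spark.util.collection.Utils`, which appears to build its top-N logic on Guava's `Ordering`. As a hedged illustration only, here is a hypothetical stand-alone sketch of such a copied helper; the object name `TopN` and the use of the Scala standard library's bounded heap (instead of Guava's `Ordering.leastOf`) are this note's assumptions, not Spark's actual code.]

```scala
import scala.collection.mutable

// Hypothetical stand-in for the kind of self-contained helper discussed above:
// return the `num` smallest elements of an iterator without materializing it,
// using a bounded heap from the standard library. (Spark's own version is
// reported in this thread's diff to rely on Guava's Ordering instead.)
object TopN {
  def takeOrdered[T](input: Iterator[T], num: Int)(implicit ord: Ordering[T]): Seq[T] = {
    // Max-heap under `ord`: the head is the largest element retained so far,
    // i.e. the one to evict when a smaller element arrives.
    val heap = mutable.PriorityQueue.empty[T](ord)
    input.foreach { x =>
      if (heap.size < num) {
        heap.enqueue(x)
      } else if (num > 0 && ord.lt(x, heap.head)) {
        heap.dequeue()
        heap.enqueue(x)
      }
    }
    // dequeueAll yields largest-first; reverse to return ascending order.
    heap.dequeueAll.reverse
  }
}
```

Keeping at most `num` elements in the heap makes this O(n log k) in time and O(k) in space, which is why such a helper matters "at scale" as argued later in the thread.
---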
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/5084#issuecomment-85309719
(Edit: I also thought it was the `util.Utils` object. Please disregard my
latest comment)
---
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/5084#discussion_r26999555
--- Diff: core/src/main/scala/org/apache/spark/util/collection/Utils.scala ---
@@ -24,7 +26,8 @@ import com.google.common.collect.{Ordering => GuavaOrdering}
---
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/5084#issuecomment-84448537
(Oh I also didn't see that this is `collection.Utils`)
---
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5084#issuecomment-84447923
I agree with Sean and Sandy here. I don't think we should just expose
internal utilities like this. The collections classes are simple enough that
someone can just copy
---
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/5084#issuecomment-84375358
I think `Utils` was not meant to be exposed at all and don't know that
Spark should just open up all of that even as experimental. Likewise I'd like
the flexibility to rep
---
Github user mridulm commented on the pull request:
https://github.com/apache/spark/pull/5084#issuecomment-84309441
These are not generic Scala collections - they are specific to using Spark at
scale.
Since we already have them in Spark core, it is better to expose them as
experimental
---
Github user sryza commented on the pull request:
https://github.com/apache/spark/pull/5084#issuecomment-84060703
Would we ever expect to expose these classes in a stable fashion? Exposing
generic Scala collection functionality seems somewhat orthogonal to Spark's
goals. Unless this
---
Github user mridulm commented on the pull request:
https://github.com/apache/spark/pull/5084#issuecomment-83194558
@pwendell Can we merge this into 1.3 as well? Otherwise we will have to wait
for 1.4...
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5084#issuecomment-83186762
[Test build #28831 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28831/consoleFull)
for PR 5084 at commit
[`174826c`](https://gith
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5084#issuecomment-83186823
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/28
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5084#issuecomment-83149293
[Test build #28831 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28831/consoleFull)
for PR 5084 at commit
[`174826c`](https://githu
---
GitHub user mridulm opened a pull request:
https://github.com/apache/spark/pull/5084
[spark] [SPARK-6168] Expose some of the collection classes as experimental
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/mridulm/spark master