Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/11178#discussion_r53119771
--- Diff: dev/run-tests.py ---
@@ -336,7 +336,6 @@ def build_spark_sbt(hadoop_version):
# Enable all of the profiles for the build
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/11178#discussion_r53119369
--- Diff: dev/run-tests.py ---
@@ -336,7 +336,6 @@ def build_spark_sbt(hadoop_version):
# Enable all of the profiles for the build
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/11178#issuecomment-184993594
@JoshRosen I am sorry for the delay here; I will try to do it today.
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/11139#discussion_r52418861
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/api/java/JavaDStreamLike.scala
---
@@ -284,50 +264,6 @@ trait JavaDStreamLike[T, This
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/5#issuecomment-181716006
@JoshRosen Thanks for taking a look. I will update it. Did you also
consider renaming, or does just updating the doc suffice?
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/5#issuecomment-181369924
I have left out the renaming for now; I am not sure about it.
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/5
[SPARK-13231] Make count failed values a user-facing API.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/ScrapCodes/spark 13231
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/10892#issuecomment-176122545
@CodingCat and @zsxwing Thanks for the PR and for letting me take a look. I am
a bit unsure about having these as API methods. I want to know what others'
opinion is on
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/10892#discussion_r51106802
--- Diff:
external/akka/src/main/scala/org/apache/spark/streaming/akka/ActorReceiver.scala
---
@@ -90,6 +90,48 @@ object ActorReceiver
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/10892#discussion_r51083142
--- Diff:
external/akka/src/main/scala/org/apache/spark/streaming/akka/ActorReceiver.scala
---
@@ -202,12 +294,11 @@ private[akka] case class
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/10292#issuecomment-166265274
Hi @jacek-lewandowski, thanks for the patch! Are you using the custom
mode, or did you try your patch against a custom deploy mode?
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/10223#issuecomment-163606752
Hmm, I will think about it more.
Github user ScrapCodes closed the pull request at:
https://github.com/apache/spark/pull/10223
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/10223
[SPARK-12238][STREAMING][DOCS] s/Advanced Sources/External Sources in docs.
While reading the streaming docs, I felt that reading "External
Sources" (instead of "Advanced Sources") seemed more
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/10014#issuecomment-160587382
Thanks, @srowen!
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/10014#issuecomment-160583952
According to the SO post, this warning was added in recent versions of the
assembly plugin. So I think it was not a problem before; even now this just fixes
that warning.
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/10014
[SPARK-12023][BUILD] Fix warnings while packaging spark with maven.
This is a trivial fix, discussed
[here](http://stackoverflow.com/questions/28500401/maven-assembly-plugin-warning-the
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/10012
[MINOR][BUILD] Changed the comment to reflect that the plugin project is there
to support the SBT pom reader only.
You can merge this pull request into a Git repository by running:
$ git pull
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/9825#discussion_r45456679
--- Diff:
repl/scala-2.10/src/main/scala/org/apache/spark/repl/SparkIMain.scala ---
@@ -1221,10 +1221,16 @@ import
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/9825#discussion_r45456618
--- Diff:
repl/scala-2.10/src/test/scala/org/apache/spark/repl/ReplSuite.scala ---
@@ -278,6 +281,27 @@ class ReplSuite extends SparkFunSuite
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/9825#discussion_r45331355
--- Diff:
repl/scala-2.10/src/main/scala/org/apache/spark/repl/SparkIMain.scala ---
@@ -1221,10 +1221,16 @@ import
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/9825#discussion_r45331187
--- Diff:
repl/scala-2.10/src/test/scala/org/apache/spark/repl/ReplSuite.scala ---
@@ -278,6 +281,27 @@ class ReplSuite extends SparkFunSuite
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/9824#discussion_r45329272
--- Diff: repl/scala-2.11/src/main/scala/org/apache/spark/repl/Main.scala
---
@@ -43,10 +39,20 @@ object Main extends Logging {
def main(args
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/6841#issuecomment-148664197
Hey @tdas, maybe I can take a look at this PR, given that you are inclined
towards moving AkkaUtils to an external project?
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/6832#issuecomment-123246180
Yeah, looks good!
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/7375#discussion_r34537727
--- Diff: pom.xml ---
@@ -1423,34 +1452,12 @@
test
- ${test_classpath_file
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/6903#issuecomment-119558510
Yes, it can be. But enforcing it codebase-wide is a huge (and unjustified, in
the sense that it affects code history) effort.
See: https://github.com/scalastyle
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/6832#issuecomment-119153050
I was also inclined towards having an echo (like the one in the script above).
It gives the user necessary feedback as to what changed.
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/6903#issuecomment-117959727
Instead of breaking, let's skip genjavadoc for Scala 2.11 in SBT. I
hope that should be doable, @dragos?
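The change itself is elided above, but skipping genjavadoc per Scala version in SBT could look roughly like this sketch (the plugin version and setting placement are assumptions for illustration, not Spark's actual build code):
```scala
// Hypothetical build.sbt fragment: attach the genjavadoc compiler plugin
// only when not building for Scala 2.11, so the javadoc-generation step
// is skipped there. The version "0.8" is an assumed placeholder.
libraryDependencies ++= {
  if (scalaBinaryVersion.value == "2.11") Seq.empty
  else Seq(compilerPlugin(
    "com.typesafe.genjavadoc" %% "genjavadoc-plugin" % "0.8" cross CrossVersion.full))
}
```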
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/6903#issuecomment-117438365
Yes, I am fairly sure that they are unrelated failures. As for the MiMa checks,
they should pick the right artifact version according to the Scala version. I did
not get a
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/6903#issuecomment-117088274
You can maybe do it with Maven; that does not do MiMa checks.
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/6903#issuecomment-117069955
Are you able to run tests on the Scala 2.11 build locally? I am having a tough
time with spurious failures. :(
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/6832#issuecomment-115637219
Yes, it looks simple, if you think this suggestion works for both GNU and BSD
systems. Can we update the patch accordingly?
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/6832#issuecomment-115530179
@srowen Did you get a chance to try the above out? Did it work?
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/6903#issuecomment-115248670
I am sure @pwendell or @andrewor14 can help.
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/6832#issuecomment-115224669
Try this as `dev/change-version-to-2.11.sh` on OS X with BSD sed:
```bash
# Note that this will not necessarily work as intended with non-GNU sed
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/6832#issuecomment-115210837
BTW: is there a way to report spam on GitHub?
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/6832#issuecomment-115210521
@srowen I tried it; all we need to do is drop the `-i` flag, which replaces
the file in place. Instead we probably need to redirect the stream by correctly
setting
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/6903#issuecomment-115106553
Can you update to Scala 2.11.7 as part of this? It is just a flip of the
version numbers in pom.xml.
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/6903#issuecomment-115106575
BTW http://scala-lang.org/news/2.11.7
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/6903#issuecomment-114372971
IMHO this cannot be merged before 2.11.7 is released.
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/6903#discussion_r33011966
--- Diff:
repl/scala-2.11/src/main/scala/org/apache/spark/repl/SparkILoop.scala ---
@@ -7,82 +7,44 @@ package scala
package tools.nsc
--- End
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/6903#discussion_r33011916
--- Diff: repl/scala-2.11/src/main/scala/org/apache/spark/repl/Main.scala
---
@@ -32,7 +34,9 @@ object Main extends Logging {
val outputDir
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/6903#discussion_r33011852
--- Diff:
repl/scala-2.11/src/test/scala/org/apache/spark/repl/ReplSuite.scala ---
@@ -274,7 +268,7 @@ class ReplSuite extends SparkFunSuite
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/6903#issuecomment-114372083
We can change the LICENSE file too, and add this to the RAT checks.
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/6903#discussion_r33011764
--- Diff:
repl/scala-2.11/src/main/scala/org/apache/spark/repl/SparkILoop.scala ---
@@ -98,875 +60,39 @@ class SparkILoop(in0: Option[BufferedReader
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/6903#discussion_r32906350
--- Diff:
repl/scala-2.11/src/main/scala/org/apache/spark/repl/SparkILoop.scala ---
@@ -98,875 +60,39 @@ class SparkILoop(in0: Option[BufferedReader
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/6903#issuecomment-113873736
Thanks for putting this together; I am going to take a closer look.
Finally we will be relieved of the burden of maintaining and porting the REPL per
release.
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/6903#discussion_r32891497
--- Diff:
repl/scala-2.11/src/main/scala/org/apache/spark/repl/SparkILoop.scala ---
@@ -98,875 +60,39 @@ class SparkILoop(in0: Option[BufferedReader
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/6832#issuecomment-113498883
I am going to give it a shot, if you allow me some time.
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/6832#issuecomment-113480419
I am inclined towards having a sed replacement command which is POSIX-compatible
(works on both BSD and GNU). I do not have a BSD system handy right
now. :/ to
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/6832#issuecomment-112308999
I am OK with 1.
Is there a reason we should restrict ourselves to GNU sed? I supposed our
scripts were POSIX compliant.
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2158#issuecomment-101653305
I am closing this for now. Unless this becomes a highly critical fix, it is
maybe not worth looking at.
Github user ScrapCodes closed the pull request at:
https://github.com/apache/spark/pull/2158
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/5977#issuecomment-100197430
Yes, if you think they are all the exact same patch.
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/5977#issuecomment-100183450
Side note: unless it is a non-trivial merge, our merge script takes care
of merging to other branches automatically. So we don't need separate PRs for
the same
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/5977#issuecomment-100183221
Ahh, that makes sense then. Thanks a lot, @jacek-lewandowski!
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/5977#issuecomment-100175242
@jacek-lewandowski Can you please close the other pull requests that have the
same issue ID and are now irrelevant?
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/5977#issuecomment-100170256
I am sorry, I was under the assumption that we are religious about it. A quick
search across the codebase proved me wrong. Matchers are widely used in at
least the recently added
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/5977#issuecomment-100126071
Yes.
https://github.com/apache/spark/pull/5977#issuecomment-100103061
This is what I was referring to:
https://cwiki.apache.org/confluence/display/SPARK/Spark+Code+Style+Guide#SparkCodeStyleGuide-InfixMethods
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/5977#issuecomment-100102939
@JoshRosen Thanks for pointing that out.
Until now I was under the impression that x.getClass is the same as classOf[x].
```scala
scala> clas
```
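The REPL transcript is cut off above; a minimal sketch of the distinction being pointed out (illustrative, not the original snippet):
```scala
// classOf[T] is statically typed Class[T], while an instance's getClass is
// the existential Class[_ <: T], so the two are not interchangeable types.
val a: Class[String] = classOf[String]    // compiles
val b: Class[_ <: String] = "x".getClass  // note the _ <: String
// val c: Class[String] = "x".getClass    // would not compile without a cast
```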
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/3551#issuecomment-100096192
From recent activity, I have seen a few commits going in that show our intent
of moving away from Akka to our own Spark-owned solution for dealing with
transport
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/5832#issuecomment-98069844
Hi @nirandaperera, can you create a JIRA for the issue you are fixing
here? That way we would know why this change was made.
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/5662#discussion_r29041655
--- Diff:
repl/scala-2.11/src/main/scala/org/apache/spark/repl/SparkIMain.scala ---
@@ -1129,7 +1129,7 @@ class SparkIMain(@BeanProperty val factory
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/5662#discussion_r29022546
--- Diff:
repl/scala-2.11/src/main/scala/org/apache/spark/repl/SparkIMain.scala ---
@@ -1129,7 +1129,7 @@ class SparkIMain(@BeanProperty val factory
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/5662
[SPARK-7092] Update spark scala version to 2.11.6
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/ScrapCodes/spark-1
SPARK-7092/scala
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/5662#issuecomment-95573076
@pwendell Please take a look :)
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/5652
[HOTFIX][sql] Fix compilation for scala 2.11.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/ScrapCodes/spark-1
hf/compilation-fix-scala
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/5593#issuecomment-94447161
@mengxr and @marmbrus PTAL.
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/5593
[SPARK-7011] Build (compilation) fails with the Scala 2.11 option, because a
protected[sql] type is accessed in the ml package.
You can merge this pull request into a Git repository by running
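The PR description is truncated above; a minimal sketch of the kind of access error it describes, with hypothetical names:
```scala
// A protected[sql] member is visible only within the sql package (and
// subclasses), so code in the sibling ml package fails to compile.
package org.apache.spark.sql {
  class Internal { protected[sql] def hidden: Int = 42 }
}
package org.apache.spark.ml {
  object Uses {
    // new org.apache.spark.sql.Internal().hidden
    // -> error: method hidden cannot be accessed in class Internal
  }
}
```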
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/5354#issuecomment-94432612
Hmm, your approach sounds good to me; we should really make sure we have
the same version of a library across dependent projects.
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/5354#issuecomment-94431123
@srowen I am not sure I understand; where did you see hardcoded versions
for httpclient/httpcore?
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/5421#issuecomment-91199313
You are right, that check is just redundant! Also, this PR LGTM once the fix
that @srowen suggested is done.
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/5402#issuecomment-90807341
(Though it is redundant to say. :) ) LGTM too.
Github user ScrapCodes closed the pull request at:
https://github.com/apache/spark/pull/4248
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/4876#issuecomment-77169399
Based on your comments and this issue, I am going to take a deeper look
again tomorrow. Meanwhile, if you are rushed with respect to this PR, it looks
like a good
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/4876#issuecomment-77168087
Hmm, I was trusting that our effective-pom trick was working properly. I forgot
the JIRA ID for that, though.
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/4876#issuecomment-77097244
I am not a sed wizard; I was just going through the man page. It does not say
anything about -e being POSIX or non-POSIX, sadly! I do not have a Mac, so no
clues
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/4876#discussion_r25752030
--- Diff: dev/change-version-to-2.10.sh ---
@@ -17,4 +17,7 @@
# limitations under the License.
#
find . -name 'pom.xml' | grep
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/4512#issuecomment-75489848
Looks good! I haven't tested it, though.
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/4512#issuecomment-73834940
I agree, the external actor system would need the same configuration; I
will let you take the call.
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/4512#issuecomment-73833899
Yes, I think in the meantime it is worth doing, considering we would be
saving a useless thread. And it is not a lot of effort either (hopefully?).
What do
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/4512#discussion_r24474315
--- Diff: core/src/main/scala/org/apache/spark/util/AkkaUtils.scala ---
@@ -106,7 +104,6 @@ private[spark] object AkkaUtils extends Logging
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/4512#issuecomment-73833260
Ahh, it looks like the threshold property is gone. I think the most appropriate
solution is to provide an empty implementation under this property:
`implementation-class
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/4512#discussion_r24473919
--- Diff: core/src/main/scala/org/apache/spark/util/AkkaUtils.scala ---
@@ -106,7 +104,6 @@ private[spark] object AkkaUtils extends Logging
Github user ScrapCodes closed the pull request at:
https://github.com/apache/spark/pull/4264
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/4264#issuecomment-72600510
I cannot get these tests to run; perhaps there is a way to run them
that I do not know. Please see the JIRA for a related comment.
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/4248#discussion_r23831675
--- Diff: project/SparkBuild.scala ---
@@ -381,7 +381,9 @@ object TestSettings {
javaOptions in Test += "-Dspark.ui.enabled=
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/4257#issuecomment-71972197
You can print the same details by calling rdd.toDebugString in your program,
no?
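As a quick illustration of the suggestion (a sketch, assuming a SparkContext named `sc` is already available):
```scala
// Print an RDD's lineage: toDebugString shows the chain of parent RDDs,
// which covers the same details the PR wanted to surface.
val rdd = sc.parallelize(1 to 10).map(_ * 2).filter(_ > 5)
println(rdd.toDebugString)
```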
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/4264
[SPARK-5475][CORE] Java 8 tests are like maintenance overhead.
Having tests that validate that the same code is compatible with both Java 8 and
Java 7 is like asserting that Java 8 is backward compatible
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/4248#issuecomment-7199
I was confused here about whether to use a manual clock or the system clock,
since the system clock made the test considerably faster. P.S. The test failures
seem to be unrelated.
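For context, a minimal sketch of why a manual clock makes such tests deterministic (a simplified stand-in, not Spark's internal ManualClock):
```scala
// Time advances only when the test says so; the trade-off is that the
// clock must be wired through the code under test.
class FakeClock(private var now: Long = 0L) {
  def getTimeMillis: Long = now
  def advance(ms: Long): Unit = { now += ms }
}
val clock = new FakeClock()
clock.advance(500)
assert(clock.getTimeMillis == 500)
```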
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/4248#issuecomment-71966684
Jenkins, retest this please.
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/4248
[SPARK-3872][STREAMING] Rewrite the test for Actor as receiver in Spark
Streaming.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/4232#issuecomment-71787984
I am curious: what is the benefit of this change?
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/4043#issuecomment-70453703
@pwendell - patch updated to latest master.
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/4043#discussion_r23074849
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/AllStagesPage.scala
---
@@ -37,12 +37,18 @@ private[ui] class AllStagesPage(parent: StagesTab
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/4043#discussion_r23068344
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/AllStagesPage.scala
---
@@ -37,12 +37,18 @@ private[ui] class AllStagesPage(parent: StagesTab
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/4043#issuecomment-70220772
LinkedHashMap was introduced to maintain the insertion order of
stageIds (HashMap would lead to arbitrary order); all maps in Scala return
an `Option` on `remove
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/4043#issuecomment-70219564
Good point; on a Scala map, remove returns an Option and does not throw
an exception. However, for lists what you said holds. See
https://github.com/scala/scala/blob/2.11
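A minimal sketch of the behavior both comments describe (illustrative, not the PR's code):
```scala
import scala.collection.mutable

// LinkedHashMap preserves insertion order, unlike HashMap, and mutable
// maps return an Option from remove rather than throwing on a missing key.
val stageIds = mutable.LinkedHashMap(1 -> "stage-1", 2 -> "stage-2")
stageIds.remove(1)  // Some("stage-1")
stageIds.remove(9)  // None; no exception is thrown
```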