Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/110#issuecomment-37121987
Can one of the admins verify this patch?
---
Github user aarondav commented on a diff in the pull request:
https://github.com/apache/spark/pull/106#discussion_r10414915
--- Diff: core/src/main/scala/org/apache/spark/api/java/JavaRDD.scala ---
@@ -135,7 +135,11 @@ class JavaRDD[T](val rdd: RDD[T])(implicit val classTag:
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/99
---
Github user aarondav commented on the pull request:
https://github.com/apache/spark/pull/106#issuecomment-37133978
Just had a couple questions essentially related to our deprecation policy.
Change looks good to me otherwise. Thanks for the cleanup!
---
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/106#discussion_r10414992
--- Diff: core/src/main/scala/org/apache/spark/api/java/JavaRDD.scala ---
@@ -135,7 +135,11 @@ class JavaRDD[T](val rdd: RDD[T])(implicit val classTag:
GitHub user pwendell opened a pull request:
https://github.com/apache/spark/pull/111
Fix markup errors introduced in #33 (SPARK-1189)
These were causing errors on the configuration page.
You can merge this pull request into a Git repository by running:
$ git pull
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/98
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/111
---
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/106#issuecomment-37136132
As per our offline discussion, let's either just remove the existing
function calls (which is okay since this is a 1.0 release) or continue to
support them with
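The second option would presumably mean keeping the old entry points behind
Scala's @deprecated annotation. A minimal sketch of that pattern; the names
below are illustrative stand-ins, not the actual methods touched by this patch:

    // Sketch only: keep the old setter but emit a compile-time warning.
    // setGenerator/setName are hypothetical stand-ins for the real API.
    class RDDLike {
      @deprecated("generator is being removed; use setName instead", "1.0.0")
      def setGenerator(g: String): Unit = setName(g)

      def setName(n: String): Unit = { /* retained replacement */ }
    }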
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/100#issuecomment-37136248
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/13083/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/100#issuecomment-37136247
Merged build finished.
---
Github user aarondav commented on the pull request:
https://github.com/apache/spark/pull/108#issuecomment-37136294
Will this require the spark-ec2 scripts to pull in this dependency as well?
---
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/108#issuecomment-37137320
@aarondav the spark-ec2 scripts set up a Ganglia cluster separately... they
actually don't use the `GangliaSink` in Spark at present. If we change spark-ec2
to use that
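For context, Spark's metrics sinks, including the `GangliaSink` mentioned
here, are wired up through conf/metrics.properties. A minimal sketch, with
illustrative host, port, and period values:

    # Sketch of a GangliaSink entry in conf/metrics.properties; values are examples.
    *.sink.ganglia.class=org.apache.spark.metrics.sink.GangliaSink
    *.sink.ganglia.host=239.2.11.71
    *.sink.ganglia.port=8649
    *.sink.ganglia.period=10
    *.sink.ganglia.unit=seconds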
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/106#issuecomment-37147113
I'd say just remove them -- this is a pretty small feature and easy to work
around if anyone used it.
---
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/106#discussion_r10417424
--- Diff: core/src/main/scala/org/apache/spark/rdd/RDD.scala ---
@@ -1031,8 +1026,10 @@ abstract class RDD[T: ClassTag](
private var
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/101#issuecomment-37150695
anyone would like to review this?
---
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/112#issuecomment-37152085
/cc @joshrosen
---
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/113
Upgrade Jetty to 9.1.3.v20140225.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/rxin/spark jetty9
Alternatively you can review and apply these
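For reference, pulling that Jetty release into an sbt build would look
roughly like the line below; jetty-server is shown as one representative
artifact, not the full set of modules the patch actually touches:

    // Illustrative sbt coordinate for the Jetty version named in the PR title.
    libraryDependencies += "org.eclipse.jetty" % "jetty-server" % "9.1.3.v20140225"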
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/113#issuecomment-37152830
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/113#issuecomment-37154482
One or more automated tests failed
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/13084/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/112#issuecomment-37154483
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/13085/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/114#issuecomment-37154497
Can one of the admins verify this patch?
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/102#issuecomment-37154500
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/102#issuecomment-37154499
Merged build triggered.
---
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/106#issuecomment-37154596
@mateiz @aarondav This update simply removes the `generator` setters and
changes the naming as per Matei's suggestion.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/106#issuecomment-37154609
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/106#issuecomment-37154608
Merged build triggered.
---
Github user kellrott commented on the pull request:
https://github.com/apache/spark/pull/109#issuecomment-37154745
That would be enough for my usage. Wasn't aware of the '-i' flag; it
doesn't show up in '--help' and wasn't mentioned when I asked about this
feature on spark-users
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/108#issuecomment-37154826
Hey @mateiz updated the docs.
---
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/109#issuecomment-37154936
What if instead we added the `-i` and `-e` flags from the Scala REPL to the
help docs? I don't think there is any way for the user to see these now. We
should make sure
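Assuming spark-shell forwards these flags to the underlying Scala REPL
unchanged, usage would look roughly like this; the file name and expression
are illustrative:

    # Hypothetical invocations; -i preloads a file, -e evaluates one expression.
    ./bin/spark-shell -i init.scala
    ./bin/spark-shell -e 'println("hello")'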
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/99#issuecomment-37155320
I'm used to thinking of Spark standalone clusters as having 4 different
types of JVMs -- 1 driver, 1 master, N workers, and M executors. Does
SPARK_DAEMON_MEMORY control
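For context on the question being asked: SPARK_DAEMON_MEMORY is documented as
setting the heap for the standalone master and worker daemons themselves, not
for executors. A sketch of the related conf/spark-env.sh entries, with example
values:

    # Example conf/spark-env.sh entries; values are illustrative.
    SPARK_DAEMON_MEMORY=1g    # heap for the master and worker daemon JVMs
    SPARK_WORKER_MEMORY=16g   # total memory a worker may allocate to executors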