GitHub user sryza opened a pull request:
https://github.com/apache/spark/pull/1461
SPARK-2553. CoGroupedRDD unnecessarily allocates a Tuple2 per dependency per key
My humble opinion is that avoiding allocations in this performance-critical
section is worth the extra
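As a toy illustration of the allocation concern (a Python sketch, not Spark's actual Scala code), a cogroup can append each record's value directly into a per-dependency buffer selected by index, instead of wrapping every record in a fresh pair object per dependency per key:

```python
# Sketch of the idea behind SPARK-2553: one array of per-dependency buffers
# per key, filled in place, rather than a new pair allocation per record.

def cogroup(*datasets):
    """datasets: iterables of (key, value) pairs, one per dependency."""
    n = len(datasets)
    combined = {}  # key -> list of n buffers, one per dependency
    for dep_index, dataset in enumerate(datasets):
        for key, value in dataset:
            buffers = combined.get(key)
            if buffers is None:
                buffers = [[] for _ in range(n)]
                combined[key] = buffers
            # Append into the buffer for this dependency; no per-record
            # wrapper object is created.
            buffers[dep_index].append(value)
    return combined
```

For example, `cogroup([("a", 1), ("b", 2)], [("a", 3)])` yields one entry per key, each holding a buffer per input dataset.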
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1361#issuecomment-49272423
QA tests have started for PR 1361. This patch merges cleanly. View
progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/16774/consoleFull
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1447#issuecomment-49272425
QA tests have started for PR 1447. This patch merges cleanly. View
progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/16773/consoleFull
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/1361#issuecomment-49272169
Jenkins, test this please.
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/1361#issuecomment-49272156
Jenkins, add to whitelist.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1460#issuecomment-49271938
QA tests have started for PR 1460. This patch merges cleanly. View
progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/16772/consoleFull
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1262#issuecomment-49271814
QA results for PR 1262:
- This patch PASSES unit tests.
- This patch merges cleanly.
- This patch adds no public classes.
For more information see test output: https://amplab.c
Github user mengxr closed the pull request at:
https://github.com/apache/spark/pull/1459
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/1459#issuecomment-49271772
Merged into branch-0.9.
GitHub user davies opened a pull request:
https://github.com/apache/spark/pull/1460
[SPARK-2538] [PySpark] Hash based disk spilling aggregation
During aggregation in the Python worker, if the memory usage is above
spark.executor.memory, it will do disk spilling aggregation.
It
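The spilling scheme described above can be sketched in miniature (a hypothetical helper, not PySpark's actual code, with a key-count budget standing in for real memory tracking against spark.executor.memory): aggregate into an in-memory dict; when it grows past the budget, partition its entries by key hash into spill files; finally merge each hash partition's spill files separately, so no single merge needs the whole dataset in memory:

```python
# Toy external hash aggregation: in-memory dict, hash-partitioned spill
# files on overflow, per-partition merge at the end.
import os
import pickle
import tempfile

NUM_PARTITIONS = 4

def external_aggregate(pairs, merge, memory_limit):
    current = {}      # in-memory aggregation buffer
    spill_dirs = []   # one temp directory per spill

    def spill():
        d = tempfile.mkdtemp()
        spill_dirs.append(d)
        files = [open(os.path.join(d, str(i)), "wb")
                 for i in range(NUM_PARTITIONS)]
        for k, v in current.items():
            # Route each key to a partition file by hash, so all spilled
            # copies of a key end up in the same partition.
            pickle.dump((k, v), files[hash(k) % NUM_PARTITIONS])
        for f in files:
            f.close()
        current.clear()

    for k, v in pairs:
        current[k] = merge(current[k], v) if k in current else v
        if len(current) >= memory_limit:  # stand-in for a memory check
            spill()

    if not spill_dirs:
        return dict(current)
    spill()  # flush the remainder so every value lives in spill files

    result = {}
    for i in range(NUM_PARTITIONS):
        partition = {}
        for d in spill_dirs:
            with open(os.path.join(d, str(i)), "rb") as f:
                while True:
                    try:
                        k, v = pickle.load(f)
                    except EOFError:
                        break
                    partition[k] = merge(partition[k], v) if k in partition else v
        result.update(partition)
    return result
```

Partitioning by hash at spill time is what keeps the final merge bounded: only one partition's worth of keys is resident while merging.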
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/1349
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/1349#issuecomment-49271133
Thanks Andrew, looks good!
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/1399#discussion_r15045195
--- Diff: sbin/start-thriftserver.sh ---
@@ -0,0 +1,24 @@
+#!/usr/bin/env bash
+
+#
+# Licensed to the Apache Software Foundation (ASF) under
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/1445
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/1337
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/1337#issuecomment-49270261
Okay I merged this.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/1450#issuecomment-49269144
I created a JIRA to deal with this and did some initial exploration, but I
think I'll need to wait for Prashant to actually do it:
https://issues.apache.org/jira
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1459#issuecomment-49267990
QA results for PR 1459:
- This patch PASSES unit tests.
- This patch merges cleanly.
- This patch adds no public classes.
For more information see test output: https://amplab.c
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1447#issuecomment-49267874
QA tests have started for PR 1447. This patch merges cleanly. View
progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/16771/consoleFull
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/1447#discussion_r15044062
--- Diff: core/src/main/scala/org/apache/spark/rdd/PairRDDFunctions.scala
---
@@ -712,8 +701,8 @@ class PairRDDFunctions[K, V](self: RDD[(K, V)])
va
Github user sryza commented on a diff in the pull request:
https://github.com/apache/spark/pull/1447#discussion_r15044028
--- Diff: core/src/main/scala/org/apache/spark/rdd/PairRDDFunctions.scala
---
@@ -712,8 +701,8 @@ class PairRDDFunctions[K, V](self: RDD[(K, V)])
v
Github user sryza commented on a diff in the pull request:
https://github.com/apache/spark/pull/1447#discussion_r15043860
--- Diff: core/src/main/scala/org/apache/spark/rdd/PairRDDFunctions.scala
---
@@ -216,17 +216,17 @@ class PairRDDFunctions[K, V](self: RDD[(K, V)])
Github user sryza commented on a diff in the pull request:
https://github.com/apache/spark/pull/1447#discussion_r15043849
--- Diff: core/src/main/scala/org/apache/spark/rdd/PairRDDFunctions.scala
---
@@ -571,12 +571,7 @@ class PairRDDFunctions[K, V](self: RDD[(K, V)])
th
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1450#issuecomment-49266828
QA tests have started for PR 1450. This patch merges cleanly. View
progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/16770/consoleFull
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1459#issuecomment-49266193
QA tests have started for PR 1459. This patch merges cleanly. View
progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/16769/consoleFull
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/1450#discussion_r15043414
--- Diff: core/src/main/scala/org/apache/spark/rdd/PairRDDFunctions.scala
---
@@ -214,7 +214,7 @@ class PairRDDFunctions[K, V](self: RDD[(K, V)])
thro
GitHub user mengxr opened a pull request:
https://github.com/apache/spark/pull/1459
update CHANGES.txt
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/mengxr/spark v0.9.2-rc
Alternatively you can review and apply these changes a