Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4065#issuecomment-70219575
[Test build #25643 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25643/consoleFull)
for PR 4065 at commit
[`83735da`](https://githu
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/4043#issuecomment-70219564
Good point: in Scala, `Map`'s `remove` returns an `Option` and does not throw
an exception. However, for lists what you said holds. See
https://github.com/scala/scala/blob/2.11.x
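The distinction being described can be sketched in a few lines (a standalone Scala script; the collections here are illustrative, not the Spark code under review):

```scala
import scala.collection.mutable

// mutable.Map.remove returns an Option and never throws for a missing key.
val m = mutable.Map("a" -> 1)
assert(m.remove("a") == Some(1))
assert(m.remove("missing") == None) // absent key: None, no exception

// By contrast, positional removal on a list-like buffer throws
// IndexOutOfBoundsException for a bad index.
val buf = mutable.ListBuffer(1, 2, 3)
assert(buf.remove(0) == 1)
val threw =
  try { buf.remove(99); false }
  catch { case _: IndexOutOfBoundsException => true }
assert(threw)
```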
Github user ksakellis commented on the pull request:
https://github.com/apache/spark/pull/4067#issuecomment-70219531
So is this code you were referring to in HadoopRDD?
```scala
// Find a function that will return the FileSystem bytes read by this
// thread. Do this before
```
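For context, a hedged sketch of the mechanism the quoted comment refers to: Hadoop's `FileSystem.Statistics` tracks per-thread counters, so a callback that captures them can report bytes read later on the same thread. This assumes hadoop-common (2.5+) on the classpath and is simplified relative to whatever HadoopRDD actually does:

```scala
import org.apache.hadoop.fs.FileSystem
import scala.collection.JavaConverters._

// Build a closure that, when invoked on the same thread, returns the total
// bytes read on that thread across all registered FileSystem schemes.
def threadBytesReadCallback(): () => Long = {
  val allStats = FileSystem.getAllStatistics.asScala
  () => allStats.map(_.getThreadStatistics.getBytesRead).sum
}
```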
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/4043#issuecomment-70219039
Ah I see - so on the first point, the issue may be covered by the code you
referenced. For some reason the diff originally rendered in a way where I
didn't notice that.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4065#issuecomment-70218782
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/25
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4065#issuecomment-70218777
[Test build #25636 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25636/consoleFull)
for PR 4065 at commit
[`e9f1de3`](https://gith
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4008#issuecomment-70218720
[Test build #25642 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25642/consoleFull)
for PR 4008 at commit
[`d202e6e`](https://githu
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/4067#issuecomment-70218728
The Scala stuff was mostly about the previous PR that got merged (and is now
no longer showing up as part of this diff).
---
If your project is set up for it, you can reply to
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4067#issuecomment-70218719
[Test build #25641 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25641/consoleFull)
for PR 4067 at commit
[`3c2d021`](https://githu
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/4067#issuecomment-70218672
Hi again - I can't find my previous comment since the line is no longer in
the diff due to the other PR being merged. Can you still add a comment for that
one (the part with Opt
Github user ksakellis commented on the pull request:
https://github.com/apache/spark/pull/4067#issuecomment-70218640
@rxin I updated the PR after doing a rebase and also incorporated some of
your feedback. You made two general comments:
1) specific implementations might've gone a b
Github user rcsenkbeil commented on the pull request:
https://github.com/apache/spark/pull/4034#issuecomment-70218446
Took a little longer on an old computer, but I made sure it built
successfully this time. Should hopefully be good to go now.
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/3872#discussion_r23066523
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala ---
@@ -383,6 +383,19 @@ private[hive] class HiveMetastoreCatalog(hive:
H
Github user tianyi commented on the pull request:
https://github.com/apache/spark/pull/3946#issuecomment-70218137
I mean I can't match a job and an execution via the statement if there are
two identical SQL queries running
> On Jan 16, 2015, at 11:21, Fei Wang wrote:
>
> bu
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4066#issuecomment-70217984
[Test build #25640 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25640/consoleFull)
for PR 4066 at commit
[`8c64d12`](https://githu
Github user ksakellis commented on a diff in the pull request:
https://github.com/apache/spark/pull/4067#discussion_r23066295
--- Diff: core/src/main/scala/org/apache/spark/rdd/HadoopRDD.scala ---
@@ -213,18 +213,19 @@ class HadoopRDD[K, V](
logInfo("Input split: " + spli
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3935#issuecomment-70217508
[Test build #25639 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25639/consoleFull)
for PR 3935 at commit
[`521bbd7`](https://gith
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3935#issuecomment-70217510
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/25
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3935#issuecomment-70217298
[Test build #25639 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25639/consoleFull)
for PR 3935 at commit
[`521bbd7`](https://githu
Github user jkbradley commented on the pull request:
https://github.com/apache/spark/pull/3637#issuecomment-70217317
Looks like MIMA failures...and this might need another update after
[https://github.com/apache/spark/pull/4065] gets merged
Github user javadba closed the pull request at:
https://github.com/apache/spark/pull/1586
Github user jkbradley commented on the pull request:
https://github.com/apache/spark/pull/4065#issuecomment-70217135
Same here; ML updates LGTM
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/3997#issuecomment-70217074
@hhbyyh @srowen There are some performance issues if we use unnecessary
index lookups. Having many `other.values(i)` calls is slower than `val
otherValues = other.values` a
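The hoisting pattern being recommended can be sketched like this (the `Vec` class is hypothetical, standing in for MLlib's vector types):

```scala
// Hypothetical vector wrapper: each call to `values` goes through an accessor.
class Vec(private val data: Array[Double]) {
  def values: Array[Double] = data
}

val other = new Vec(Array(1.0, 2.0, 3.0))

// Slower: `other.values(i)` re-invokes the accessor on every iteration.
var s1 = 0.0
var i = 0
while (i < 3) { s1 += other.values(i); i += 1 }

// Faster: hoist the array reference once, then index the local directly.
val otherValues = other.values
var s2 = 0.0
var j = 0
while (j < otherValues.length) { s2 += otherValues(j); j += 1 }

assert(s1 == s2) // both sums are 6.0; only the access pattern differs
```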
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/3120#discussion_r23066142
--- Diff: core/src/main/scala/org/apache/spark/executor/TaskMetrics.scala
---
@@ -146,6 +185,10 @@ class TaskMetrics extends Serializable {
}
_
Github user ksakellis commented on a diff in the pull request:
https://github.com/apache/spark/pull/3120#discussion_r23066065
--- Diff: core/src/main/scala/org/apache/spark/executor/TaskMetrics.scala
---
@@ -146,6 +185,10 @@ class TaskMetrics extends Serializable {
}
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4067#discussion_r23065966
--- Diff: core/src/main/scala/org/apache/spark/rdd/HadoopRDD.scala ---
@@ -213,18 +213,19 @@ class HadoopRDD[K, V](
logInfo("Input split: " + split.inp
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4067#discussion_r23065952
--- Diff: core/src/main/scala/org/apache/spark/executor/TaskMetrics.scala
---
@@ -179,10 +223,48 @@ object DataWriteMethod extends Enumeration with
Serializable
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/4043#issuecomment-70216469
Thanks Patrick! I have two questions inline.
> I'm not sure this can be merged as-is. The state clean-up here is based
on the assumption that every stage th
Github user cloud-fan commented on the pull request:
https://github.com/apache/spark/pull/4068#issuecomment-70216307
ping @marmbrus @liancheng
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/3120#discussion_r23065867
--- Diff: core/src/main/scala/org/apache/spark/executor/TaskMetrics.scala
---
@@ -146,6 +185,10 @@ class TaskMetrics extends Serializable {
}
_
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/4067#issuecomment-70215975
Hey @ksakellis - Thanks for working on this.
I took a very quick look at the patch. Overall I feel the patch should be
fairly straightforward, but the specific impl
Github user rcsenkbeil commented on the pull request:
https://github.com/apache/spark/pull/4034#issuecomment-70215690
Whoops, missed adding an import for the DeveloperApi on _SparkCommandLine_.
Give me a minute to add it.
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4067#discussion_r23065689
--- Diff: core/src/test/scala/org/apache/spark/util/JsonProtocolSuite.scala
---
@@ -630,23 +659,27 @@ class JsonProtocolSuite extends FunSuite {
i
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4067#discussion_r23065675
--- Diff:
core/src/test/scala/org/apache/spark/storage/BlockObjectWriterSuite.scala ---
@@ -31,6 +31,8 @@ class BlockObjectWriterSuite extends FunSuite {
Github user ksakellis commented on the pull request:
https://github.com/apache/spark/pull/4067#issuecomment-70215626
This change was dependent on https://github.com/apache/spark/pull/3120,
that just got merged and now there are some merge conflicts. I need to fix
those first and will
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4068#issuecomment-70215432
Can one of the admins verify this patch?
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4067#discussion_r23065562
--- Diff: core/src/main/scala/org/apache/spark/executor/TaskMetrics.scala
---
@@ -134,6 +149,30 @@ class TaskMetrics extends Serializable {
}
GitHub user cloud-fan opened a pull request:
https://github.com/apache/spark/pull/4068
[SPARK-5278][SQL] complete the check of ambiguous reference to fields
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/cloud-fan/spark simple
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4067#discussion_r23065524
--- Diff: core/src/main/scala/org/apache/spark/executor/TaskMetrics.scala
---
@@ -179,10 +223,48 @@ object DataWriteMethod extends Enumeration with
Serializable
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4034#issuecomment-70215262
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/25
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4067#issuecomment-70215215
[Test build #25638 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25638/consoleFull)
for PR 4067 at commit
[`1572054`](https://githu
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4034#issuecomment-70215261
[Test build #25637 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25637/consoleFull)
for PR 4034 at commit
[`c1b88aa`](https://gith
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4067#discussion_r23065459
--- Diff:
core/src/main/scala/org/apache/spark/util/interceptingIterator.scala ---
@@ -0,0 +1,82 @@
+/*
+ * Licensed to the Apache Software Foundation (A
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4034#issuecomment-70214919
[Test build #25637 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25637/consoleFull)
for PR 4034 at commit
[`c1b88aa`](https://githu
Github user ksakellis commented on a diff in the pull request:
https://github.com/apache/spark/pull/4067#discussion_r23065399
--- Diff:
core/src/main/scala/org/apache/spark/util/interceptingIterator.scala ---
@@ -0,0 +1,82 @@
+/*
+ * Licensed to the Apache Software Foundati
Github user ksakellis commented on a diff in the pull request:
https://github.com/apache/spark/pull/4067#discussion_r23065373
--- Diff:
core/src/main/scala/org/apache/spark/shuffle/hash/BlockStoreShuffleFetcher.scala
---
@@ -82,7 +82,16 @@ private[hash] object BlockStoreShuffleFet
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/4034#issuecomment-70214681
Jenkins, test this please.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/4043#issuecomment-70214406
I'm not sure this can be merged as-is. The state clean-up here is based on
the assumption that every stage that is pending will at some later time be
submitted. Is that
Github user ilganeli commented on the pull request:
https://github.com/apache/spark/pull/4020#issuecomment-70214178
Hi Patrick - I did look over 3120. That one will definitely need to be
merged first and then we can finish this. Thanks.
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4067#discussion_r23065223
--- Diff:
core/src/main/scala/org/apache/spark/shuffle/hash/BlockStoreShuffleFetcher.scala
---
@@ -82,7 +82,16 @@ private[hash] object BlockStoreShuffleFetcher
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/4067#issuecomment-70214097
Can you also paste some screenshots on what the UI changes look like?
Thanks.
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4067#discussion_r23065199
--- Diff:
core/src/main/scala/org/apache/spark/util/interceptingIterator.scala ---
@@ -0,0 +1,82 @@
+/*
+ * Licensed to the Apache Software Foundation (A
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/4065#issuecomment-70213836
MLlib changes look good to me:)
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4065#issuecomment-70213678
[Test build #25636 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25636/consoleFull)
for PR 4065 at commit
[`e9f1de3`](https://githu
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4008#issuecomment-70213415
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/25
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4008#issuecomment-70213414
[Test build #25635 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25635/consoleFull)
for PR 4008 at commit
[`a87e063`](https://gith
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4008#issuecomment-70213336
[Test build #25635 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25635/consoleFull)
for PR 4008 at commit
[`a87e063`](https://githu
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/4048#issuecomment-70212728
@squito another thing is that we should look at whether these tests really
need to be integration style tests or not. I've seen people often use
`local-cluster` because
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/4008#discussion_r23064804
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/scheduler/ReceiverTracker.scala
---
@@ -274,6 +284,7 @@ class ReceiverTracker(ssc: StreamingCo
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/4008#discussion_r23064794
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/scheduler/ReceiverTracker.scala
---
@@ -23,10 +23,20 @@ import scala.language.existentials
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2310#issuecomment-70212481
@ash211 I think we can close this issue now that we have merged #3120
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/4043#issuecomment-70212456
Hi Imran, thanks for taking a look. @pwendell, could you please take a look?
Github user Lewuathe commented on the pull request:
https://github.com/apache/spark/pull/3975#issuecomment-70211978
@srowen @mengxr There seems to be no label-checking logic in each
algorithm when importing `LabeledPoint`. But these are only checked on first
access to `LabeledPoint
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4059#issuecomment-70208901
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/25
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4059#issuecomment-70208896
[Test build #25634 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25634/consoleFull)
for PR 4059 at commit
[`f82750b`](https://gith
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/3732#issuecomment-70208781
Adrian - as we spoke offline, it would be simpler (for future datetime-related
features) to just represent the Date type as a primitive int
internally, and convert to java.s
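A sketch of the representation being proposed (helper names are assumed, not Spark's actual API; real code would also need to handle time zones carefully, since `java.sql.Date` carries epoch milliseconds):

```scala
import java.sql.Date

val MillisPerDay: Long = 24L * 60 * 60 * 1000

// Internal representation: days since 1970-01-01 as a primitive Int.
def toDays(d: Date): Int = (d.getTime / MillisPerDay).toInt
def fromDays(days: Int): Date = new Date(days.toLong * MillisPerDay)

// Round-trips losslessly at day granularity.
assert(toDays(fromDays(16000)) == 16000)
assert(fromDays(0).getTime == 0L)
```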
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3629#issuecomment-70208695
[Test build #25633 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25633/consoleFull)
for PR 3629 at commit
[`f0e80f2`](https://gith
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3629#issuecomment-70208700
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/25
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4065#issuecomment-70208067
[Test build #25632 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25632/consoleFull)
for PR 4065 at commit
[`500d2c4`](https://gith
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4065#issuecomment-70208071
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/25
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/4066#discussion_r23062798
--- Diff: core/src/main/scala/org/apache/spark/SparkHadoopWriter.scala ---
@@ -105,10 +107,20 @@ class SparkHadoopWriter(@transient jobConf: JobConf)
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4066#issuecomment-70206625
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/25
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4066#issuecomment-70206621
**[Test build #25629 timed
out](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25629/consoleFull)**
for PR 4066 at commit
[`c25c997`](https://git
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4006#issuecomment-70206157
[Test build #25631 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25631/consoleFull)
for PR 4006 at commit
[`0710364`](https://gith
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4006#issuecomment-70206158
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/25
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4059#issuecomment-70205823
[Test build #25634 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25634/consoleFull)
for PR 4059 at commit
[`f82750b`](https://githu
Github user FlytxtRnD commented on the pull request:
https://github.com/apache/spark/pull/4059#issuecomment-70205738
@jkbradley The py4j serialization issue has been solved by commit
https://github.com/apache/spark/commit/8ead999fd627b12837fb2f082a0e76e9d121d269
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/3997#issuecomment-70205338
@hhbyyh Oops yes that's a typo. The helper function should refer to the
local argument `values` only!
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3629#issuecomment-70204945
[Test build #25633 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25633/consoleFull)
for PR 3629 at commit
[`f0e80f2`](https://githu
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4067#issuecomment-70204733
[Test build #25628 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25628/consoleFull)
for PR 4067 at commit
[`571cb69`](https://gith
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4067#issuecomment-70204738
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/25
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4016#issuecomment-70204370
[Test build #25630 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25630/consoleFull)
for PR 4016 at commit
[`aefa1ce`](https://gith
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4016#issuecomment-70204375
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/25
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/3629#issuecomment-70203787
@andrewor14
I will still call `releasePendingUnrollMemory()` before [this
line](https://github.com/apache/spark/blob/4e1f12d997426560226648d62ee17c90352613e7/core/src
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4065#issuecomment-70203676
[Test build #25632 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25632/consoleFull)
for PR 4065 at commit
[`500d2c4`](https://githu
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/3946#issuecomment-70202999
> but it would go wrong when the thrift server is executing two identical SQL
statements at the same time.
You mean using two beeline sessions to execute SQL (must it be the same SQL?)
at the same time
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4029#issuecomment-70202568
[Test build #25627 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25627/consoleFull)
for PR 4029 at commit
[`0af9e22`](https://gith
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4029#issuecomment-70202575
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/25
Github user sryza commented on a diff in the pull request:
https://github.com/apache/spark/pull/4067#discussion_r23061158
--- Diff: core/src/main/scala/org/apache/spark/CacheManager.scala ---
@@ -17,6 +17,8 @@
package org.apache.spark
+import org.apache.spark.ut
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/4020#issuecomment-70201497
Conflicts abound!
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/4020#discussion_r23060808
--- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
@@ -257,8 +257,8 @@ private[spark] class Executor(
val serviceTime =
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4065#issuecomment-70201464
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/25
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4065#issuecomment-70201461
[Test build #25626 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25626/consoleFull)
for PR 4065 at commit
[`ba3bfa2`](https://gith
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/3120
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4065#issuecomment-70200715
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/25
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4065#issuecomment-70200712
[Test build #25625 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25625/consoleFull)
for PR 4065 at commit
[`c4ae1c5`](https://gith
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4016#issuecomment-70199865
[Test build #25630 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25630/consoleFull)
for PR 4016 at commit
[`aefa1ce`](https://githu
Github user tianyi commented on the pull request:
https://github.com/apache/spark/pull/3946#issuecomment-70199823
@liancheng @scwf, since the `groupId` was removed in #3718, I can't find
connections between a `job` and a `statement execution` anymore. I tried to use
`statement` and `S
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/4016#discussion_r23060028
--- Diff: core/src/main/scala/org/apache/spark/util/EventLoop.scala ---
@@ -0,0 +1,124 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) unde
Github user OopsOutOfMemory commented on the pull request:
https://github.com/apache/spark/pull/3935#issuecomment-70199626
Thanks. @rxin I'll make this up-to-date.