Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5545#issuecomment-93866737
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5546#issuecomment-93877539
[Test build #30452 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30452/consoleFull)
for PR 5546 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4402#issuecomment-93873823
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/5548
[SPARK-6963][CORE] Flaky test: o.a.s.ContextCleanerSuite automatically
cleanup checkpoint
cc @andrewor14
You can merge this pull request into a Git repository by running:
$ git pull
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/5497#discussion_r28563536
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SQLConf.scala ---
@@ -139,6 +141,8 @@ private[sql] class SQLConf extends Serializable {
*/
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5063#issuecomment-93882452
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5544#issuecomment-93865283
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user MechCoder commented on a diff in the pull request:
https://github.com/apache/spark/pull/5467#discussion_r28573221
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/feature/Word2Vec.scala ---
@@ -479,9 +492,16 @@ class Word2VecModel private[mllib] (
*/
GitHub user jerryshao opened a pull request:
https://github.com/apache/spark/pull/5551
[SPARK-6975][Yarn] Fix argument validation error
`numExecutors` checking fails when dynamic allocation is enabled with the
default configuration. Details can be seen in
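The gist of the bug report above can be sketched as follows. This is a hypothetical, heavily simplified stand-in for Spark's real `ClientArguments`; the class and method names here are illustrative only, but they show the intended rule: skip the `numExecutors` check when dynamic allocation manages the executor count.

```scala
// Illustrative sketch, not Spark's actual ClientArguments code.
case class ClientConf(numExecutors: Int, dynamicAllocationEnabled: Boolean)

def validateNumExecutors(conf: ClientConf): Either[String, ClientConf] =
  if (!conf.dynamicAllocationEnabled && conf.numExecutors <= 0)
    // Only a statically sized application needs a positive executor count.
    Left(s"numExecutors must be positive, got ${conf.numExecutors}")
  else
    // With dynamic allocation on, the default value is irrelevant, so pass.
    Right(conf)
```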
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5467#issuecomment-93914210
[Test build #30464 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30464/consoleFull)
for PR 5467 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5427#issuecomment-93897975
[Test build #30458 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30458/consoleFull)
for PR 5427 at commit
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/5149#discussion_r28569271
--- Diff: dev/merge_spark_pr.py ---
@@ -286,68 +281,137 @@ def resolve_jira_issues(title, merge_branches,
comment):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5530#issuecomment-93864879
[Test build #30439 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30439/consoleFull)
for PR 5530 at commit
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/5357#issuecomment-93868139
/cc @rxin
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/5497#discussion_r28563376
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SQLConf.scala ---
@@ -139,6 +141,8 @@ private[sql] class SQLConf extends Serializable {
*/
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4015#issuecomment-93900302
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/4435#issuecomment-93898160
I'd like to finish reviewing this, but I keep getting pre-empted by other
work, so instead I'll leave a list of things that I would look at / check when
reviewing this
GitHub user maropu opened a pull request:
https://github.com/apache/spark/pull/5549
[SPARK-5352][GraphX] Add getPartitionStrategy in Graph
Graph remembers an applied partition strategy in partitionBy() and returns
it via getPartitionStrategy().
This is useful in case of the
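The proposed API above can be sketched minimally: remember the strategy handed to `partitionBy()` and expose it through a getter. The names mirror the PR description, but this toy class is not GraphX's real `Graph` signature.

```scala
// Minimal sketch of the proposed getPartitionStrategy API (hypothetical types).
sealed trait PartitionStrategy
case object EdgePartition2D extends PartitionStrategy

class SimpleGraph(private var strategy: Option[PartitionStrategy] = None) {
  // Record the applied strategy, as the PR description suggests.
  def partitionBy(s: PartitionStrategy): SimpleGraph = { strategy = Some(s); this }
  // None until partitionBy() has been called.
  def getPartitionStrategy: Option[PartitionStrategy] = strategy
}
```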
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/5178#issuecomment-93917004
@viper-kun What's the status of this patch? If you aren't planning further
updates, I'd like to brush it up.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4015#issuecomment-93900300
[Test build #30460 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30460/consoleFull)
for PR 4015 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5549#issuecomment-93893522
Can one of the admins verify this patch?
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/5144#discussion_r28573550
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterScheduler.scala
---
@@ -0,0 +1,614 @@
+/*
+ * Licensed to the
Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/5544#discussion_r28570802
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -999,6 +1017,13 @@ def _to_java_column(col):
return jcol
+def _to_seq(sc, cols,
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5543#issuecomment-93862506
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/5517
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/4886#issuecomment-93885511
@srowen
I forgot to update the description; it's already refined.
If the program reaches `if (!putLevel.useMemory) {`, it means putting a
disk-level block, or a memory_and_disk-level block.
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/5546#issuecomment-93872501
Talked with @pwendell offline, and it seems that since we don't publish with
SBT this is pretty safe. He asked me to update the docs to make it clear why
we don't do this for
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5478#issuecomment-93902319
[Test build #30462 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30462/consoleFull)
for PR 5478 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5498#issuecomment-93909594
[Test build #30463 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30463/consoleFull)
for PR 5498 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5455#issuecomment-93896772
[Test build #30457 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30457/consoleFull)
for PR 5455 at commit
Github user davies commented on the pull request:
https://github.com/apache/spark/pull/5442#issuecomment-93897920
@shivaram Should we merge this or wait for API audit?
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/5144#discussion_r28570218
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterScheduler.scala
---
@@ -0,0 +1,614 @@
+/*
+ * Licensed to the
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/5144#discussion_r28573523
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterScheduler.scala
---
@@ -0,0 +1,614 @@
+/*
+ * Licensed to the
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4402#issuecomment-93892862
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4015#issuecomment-93922737
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user MechCoder commented on the pull request:
https://github.com/apache/spark/pull/5455#issuecomment-93903354
@mengxr It would be really helpful if you could guide me on my two
questions.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4015#issuecomment-93899838
[Test build #30460 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30460/consoleFull)
for PR 4015 at commit
Github user Sephiroth-Lin commented on the pull request:
https://github.com/apache/spark/pull/5256#issuecomment-93915251
Jenkins, retest this please.
Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/5442#discussion_r28571272
--- Diff: docs/programming-guide.md ---
@@ -576,6 +660,34 @@ before the `reduce`, which would cause `lineLengths`
to be saved in memory after
</div>
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4402#issuecomment-93892854
[Test build #30454 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30454/consoleFull)
for PR 4402 at commit
GitHub user XuTingjun opened a pull request:
https://github.com/apache/spark/pull/5550
[SPARK-6973] Modify total stages/tasks on the allJobsPage
Though totalStages = allStages - skippedStages is understandable,
considering the problem [SPARK-6973], I think totalStages = allStages
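The two counting schemes being debated can be put side by side; which one the allJobsPage should show is exactly the SPARK-6973 question. This is only an illustration of the arithmetic, not the UI code.

```scala
// Current UI behaviour: skipped stages are subtracted from the total.
def totalStagesCurrent(allStages: Int, skippedStages: Int): Int =
  allStages - skippedStages

// Proposed behaviour: report every stage, skipped or not.
def totalStagesProposed(allStages: Int, skippedStages: Int): Int =
  allStages
```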
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/4138#issuecomment-93893277
Sorry, I closed it by mistake, so I re-made the PR.
https://github.com/apache/spark/pull/5549
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5455#issuecomment-93911463
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5544#issuecomment-93896816
[Test build #30456 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30456/consoleFull)
for PR 5544 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4015#issuecomment-93901819
[Test build #30461 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30461/consoleFull)
for PR 4015 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4015#issuecomment-93934762
[Test build #30467 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30467/consoleFull)
for PR 4015 at commit
Github user kayousterhout commented on the pull request:
https://github.com/apache/spark/pull/5547#issuecomment-93864988
cc @andrewor14 @pwendell
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5546#issuecomment-93887188
[Test build #30452 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30452/consoleFull)
for PR 5546 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4402#issuecomment-93873653
[Test build #30450 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30450/consoleFull)
for PR 4402 at commit
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/5063#issuecomment-93898567
@andrewor14 I've fixed the issues you raised. Please review and merge this.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/4435#issuecomment-93889612
Hey @squito it looks like the automated dependency checking isn't working
so well for this PR. Can you do a diff and list all of the dependencies this is
adding to or
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5436#issuecomment-93918020
[Test build #30465 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30465/consoleFull)
for PR 5436 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5498#issuecomment-93909599
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/4402#issuecomment-93892969
ok, fixed.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5455#issuecomment-93911425
[Test build #30457 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30457/consoleFull)
for PR 5455 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4138#issuecomment-93902330
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5550#issuecomment-93913097
Can one of the admins verify this patch?
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/5144#discussion_r28570148
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterScheduler.scala
---
@@ -0,0 +1,614 @@
+/*
+ * Licensed to the
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5538#issuecomment-93899836
[Test build #30459 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30459/consoleFull)
for PR 5538 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5256#issuecomment-93917056
[Test build #688 has
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/688/consoleFull)
for PR 5256 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5551#issuecomment-93928521
[Test build #30466 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30466/consoleFull)
for PR 5551 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5478#issuecomment-93930774
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5427#issuecomment-93930521
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5149#issuecomment-93891216
Hey @texasmichelle thanks for contributing this. It slipped off my radar, but
it will be nice to get something like this in. One thing though, even though I
originally
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5427#issuecomment-93930478
[Test build #30458 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30458/consoleFull)
for PR 5427 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5478#issuecomment-93930705
[Test build #30462 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30462/consoleFull)
for PR 5478 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5544#issuecomment-93931726
[Test build #30456 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30456/consoleFull)
for PR 5544 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5544#issuecomment-93931780
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user kayousterhout commented on the pull request:
https://github.com/apache/spark/pull/5547#issuecomment-93865113
Jenkins, this is ok to test
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/5144#discussion_r28573613
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterScheduler.scala
---
@@ -0,0 +1,614 @@
+/*
+ * Licensed to the
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4138#issuecomment-93902325
**[Test build #30455 timed
out](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30455/consoleFull)**
for PR 4138 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5498#issuecomment-93909418
[Test build #30463 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30463/consoleFull)
for PR 5498 at commit
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/5436#issuecomment-93917450
Jenkins, retest this please
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4015#issuecomment-93922706
[Test build #30461 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30461/consoleFull)
for PR 4015 at commit
Github user hhbyyh commented on the pull request:
https://github.com/apache/spark/pull/4419#issuecomment-93939245
@jkbradley Providing an update on the correctness test:
I have tested the current PR against https://github.com/Blei-Lab/onlineldavb
and the results are identical. I've uploaded
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/5551#discussion_r28577855
--- Diff:
yarn/src/main/scala/org/apache/spark/deploy/yarn/ClientArguments.scala ---
@@ -103,9 +103,14 @@ private[spark] class ClientArguments(args:
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/5550#issuecomment-93946383
Yeah, that will be the result. But considering the bug described in the
JIRA, I think it's more reasonable.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5537#issuecomment-93949870
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5537#issuecomment-93949861
[Test build #30473 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30473/consoleFull)
for PR 5537 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5541#issuecomment-93956221
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5541#issuecomment-93956208
[Test build #30469 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30469/consoleFull)
for PR 5541 at commit
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/5514
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/4602#discussion_r28575468
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/CheckAnalysis.scala
---
@@ -107,6 +113,12 @@ trait CheckAnalysis {
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/4602#discussion_r28575507
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -473,10 +473,47 @@ class Analyzer(
*/
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5541#issuecomment-93939754
[Test build #30469 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30469/consoleFull)
for PR 5541 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5144#issuecomment-93942077
[Test build #30470 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30470/consoleFull)
for PR 5144 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5144#issuecomment-93942082
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/5551#discussion_r28577520
--- Diff:
yarn/src/main/scala/org/apache/spark/deploy/yarn/ClientArguments.scala ---
@@ -103,9 +103,14 @@ private[spark] class ClientArguments(args:
Github user SaintBacchus commented on the pull request:
https://github.com/apache/spark/pull/5537#issuecomment-93947685
@andrewor14 `TransportServer#bindRightPort` will be used in the `Netty` network;
in that case, having a retry mechanism is a better way.
@vanzin I have cloned the
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5436#issuecomment-93949224
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5256#issuecomment-93949537
[Test build #30474 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30474/consoleFull)
for PR 5256 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5552#issuecomment-93949334
Can one of the admins verify this patch?
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5436#issuecomment-93949211
[Test build #30465 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30465/consoleFull)
for PR 5436 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5553#issuecomment-93950877
Can one of the admins verify this patch?
GitHub user DoingDone9 opened a pull request:
https://github.com/apache/spark/pull/5553
[SPARK-6976][SQL] drop table if exists src print ERROR info that should
not be printed when src not exists.
If table src does not exist and we run the SQL `drop table if exists src`,
then some ERROR info
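The IF EXISTS semantics the PR title describes can be sketched as follows. This is an illustrative toy, not Hive's or Spark SQL's actual code path: a missing table combined with IF EXISTS should be a silent no-op, not an ERROR log.

```scala
// Hypothetical sketch of DROP TABLE IF EXISTS semantics.
def dropTable(tables: scala.collection.mutable.Set[String],
              name: String,
              ifExists: Boolean): Either[String, Unit] =
  if (tables.remove(name)) Right(())       // table existed and was dropped
  else if (ifExists) Right(())             // absent + IF EXISTS: quiet success
  else Left(s"Table $name does not exist") // absent without IF EXISTS: error
```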
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5467#issuecomment-93951771
[Test build #30477 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30477/consoleFull)
for PR 5467 at commit
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/5144#discussion_r28575245
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterScheduler.scala
---
@@ -0,0 +1,614 @@
+/*
+ * Licensed to the
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/5546#issuecomment-93937847
Hm, doesn't the class file name affect how it's found? That's how the
classloader finds the class. I also don't know of a specific instance where
this created a problem,
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5467#issuecomment-93942492
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5467#issuecomment-93942477
[Test build #30464 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30464/consoleFull)
for PR 5467 at commit