Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/3788
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket with INFRA.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/3788#issuecomment-68171484
Thanks for this patch and @srowen for the review! I'll pull this in.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/3790
---
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/3790#issuecomment-68171361
Actually I can just merge this and add the fix.
---
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/3790#issuecomment-68171209
Looks good - but can we use the exact same wording in both cases? It will
be obvious from the previous log message whether it is the master or worker:
I'd just s
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1290#issuecomment-68171181
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/24
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1290#issuecomment-68171178
[Test build #24841 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24841/consoleFull)
for PR 1290 at commit
[`9fb76ba`](https://gith
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/3716
---
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/3716#issuecomment-68171007
Yeah this looks good - thanks!
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/3046
---
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/3046#issuecomment-68170894
Looks good - I'm going to merge this with a slight modification (adding a
comment to explain what's going on).
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/2633
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/265
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/2031
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/1602
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/2348
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/3662
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/2059
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/3456
---
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/3456#issuecomment-68170532
Let's close this issue. This breaks global pagination, which means it can't
be merged.
---
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/3766#issuecomment-68170290
Okay - makes sense. There is one incorrect change in here, but once that's
removed we can merge this.
---
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/3766#discussion_r22291358
--- Diff: pom.xml ---
@@ -97,6 +97,7 @@
sql/catalyst
sql/core
sql/hive
+sql/hive-thriftserver
--- End diff --
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3707#issuecomment-68170272
[Test build #24842 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24842/consoleFull)
for PR 3707 at commit
[`0e5a0e4`](https://githu
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/3707#issuecomment-68170254
Jenkins, test this please. LGTM pending tests.
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1290#issuecomment-68169801
[Test build #24841 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24841/consoleFull)
for PR 1290 at commit
[`9fb76ba`](https://githu
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/3778#issuecomment-68169633
Would like to add that the solution based on Spire `Interval` I posted
above may suffer from floating-point precision issues. Thus we might want to
cast all integral com
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/3784#issuecomment-68169549
Hey @scwf, I've posted my reply in #3778, so let's discuss these rules there
to prevent distraction.
---
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/3778#issuecomment-68169507
Actually I'd highly suggest breaking this PR into at least two self-contained
PRs, which would be much easier to review and merge. Rule sets 1 and 4
can be merged int
Github user tigerquoll commented on the pull request:
https://github.com/apache/spark/pull/3426#issuecomment-68166415
http://stackoverflow.com/questions/285793/what-is-a-serialversionuid-and-why-should-i-use-it
seems to be a good summary of the pros and cons of this approach
---
Github user SaintBacchus commented on the pull request:
https://github.com/apache/spark/pull/3576#issuecomment-68166365
Hi, @marmbrus @vanzin this problem also affects branch-1.2. Do we
need to fix it in branch-1.2?
---
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/3426#issuecomment-68166194
Err, across all compilers?
---
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/3426#issuecomment-68166190
Aren't default serialVersionUIDs generated in a consistent way across all
JVMs because the algorithm for generating them is part of the Java spec?
---
Github user tigerquoll commented on the pull request:
https://github.com/apache/spark/pull/3426#issuecomment-68165724
Hey @JoshRosen @sryza, should this patch include a serialVersionUID
attribute on the classes to be serialized to make sure compiler quirks don't
cause different UIDs
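The question above can be made concrete: declaring an explicit serialVersionUID sidesteps compiler-generated defaults entirely, so synthetic members added by different compilers can no longer change the stream version. A minimal Java sketch (the class name is hypothetical, not from the Spark codebase):

```java
import java.io.ObjectStreamClass;
import java.io.Serializable;

// Hypothetical example class: declaring an explicit serialVersionUID pins the
// serialized stream version, so the JVM never has to compute a default UID
// from the (compiler-dependent) class structure.
class TaskPayload implements Serializable {
    private static final long serialVersionUID = 1L;
    String name;
    TaskPayload(String name) { this.name = name; }
}

public class Main {
    public static void main(String[] args) {
        // ObjectStreamClass reports the UID that serialization will use.
        long uid = ObjectStreamClass.lookup(TaskPayload.class).getSerialVersionUID();
        System.out.println(uid); // prints 1, the explicitly declared UID
    }
}
```

Without the explicit field, the default UID is derived from class members per the Java Object Serialization Specification, which is exactly where compiler-generated synthetics can introduce drift.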
Github user matt2000 commented on the pull request:
https://github.com/apache/spark/pull/3813#issuecomment-68165298
This is not the right fix. Still working on it...
---
Github user brennonyork commented on the pull request:
https://github.com/apache/spark/pull/3559#issuecomment-68165117
@andrewor14 @JoshRosen wondering what should be done with this issue,
thoughts on my comments above??
---
Github user brennonyork commented on the pull request:
https://github.com/apache/spark/pull/3707#issuecomment-68164728
@nchammas I spoke too soon earlier regarding it correctly handling relative
paths. I fixed it and it is now `pwd`-preserving. @pwendell I also fixed the
improper quoting
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3813#issuecomment-68164558
Can one of the admins verify this patch?
---
GitHub user matt2000 opened a pull request:
https://github.com/apache/spark/pull/3813
[SPARK-4974]: Prevent Circular dependency.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/matt2000/spark master
Alternatively you can review
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/2703#discussion_r22290126
--- Diff:
streaming/src/test/java/org/apache/spark/streaming/JavaAPISuite.java ---
@@ -1703,6 +1710,65 @@ public void testTextFileStream() {
JavaDStream
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/2703#discussion_r22290121
--- Diff:
streaming/src/test/java/org/apache/spark/streaming/JavaAPISuite.java ---
@@ -1703,6 +1710,65 @@ public void testTextFileStream() {
JavaDStream
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/2703#discussion_r22290108
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/api/java/JavaStreamingContext.scala
---
@@ -250,19 +250,19 @@ class JavaStreamingContext(val ssc:
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/2806#issuecomment-68163696
Also, I took a quick look at the PR. It seems a little complicated to
understand just by looking at the code, so could you write a short design doc
(or update the PR descri
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/2806#issuecomment-68163636
There has been significant refactoring done in the FileInputStream. Can you
update the PR accordingly?
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3707#issuecomment-68161776
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/24
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3707#issuecomment-68161775
[Test build #24840 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24840/consoleFull)
for PR 3707 at commit
[`d2d41b6`](https://gith
Github user brennonyork commented on a diff in the pull request:
https://github.com/apache/spark/pull/3707#discussion_r22289411
--- Diff: sbt/sbt ---
@@ -1,111 +1,9 @@
-#!/usr/bin/env bash
+#!/bin/bash
-# When creating new tests for Spark SQL Hive, the HADOOP_CLASS
Github user brennonyork commented on the pull request:
https://github.com/apache/spark/pull/3707#issuecomment-68161145
This will handle relative directories just fine. The last portion of this
script changes the directory back to the `cwd` where the user was calling from,
so this isn't
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/2348#issuecomment-68159339
Clickable link for the lazy: [Spark Packages](http://spark-packages.org/)
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3805#issuecomment-68159276
[Test build #24839 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24839/consoleFull)
for PR 3805 at commit
[`41ede0e`](https://gith
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3805#issuecomment-68159278
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/24
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3707#issuecomment-68158971
@brennonyork Does this handle relative paths passed to Maven correctly (if
that's a valid potential use case)? We had this problem with the `spark-ec2`
script which was
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/3707#issuecomment-68158618
One small correctness question (around quoting) but looks good to me. I can
merge this later today and fix it manually if @brennonyork doesn't get around
to it.
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3707#issuecomment-68158356
[Test build #24840 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24840/consoleFull)
for PR 3707 at commit
[`d2d41b6`](https://githu
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3809#issuecomment-68158250
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/24
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3809#issuecomment-68158247
[Test build #24838 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24838/consoleFull)
for PR 3809 at commit
[`2172578`](https://gith
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/3707#issuecomment-68158154
Jenkins, test this please.
---
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/1658#discussion_r22288031
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -510,6 +510,52 @@ class SparkContext(config: SparkConf) extends Logging {
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/3803#discussion_r22287997
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/StreamingContext.scala ---
@@ -373,6 +393,25 @@ class StreamingContext private[streaming] (
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/3803#discussion_r22287961
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/StreamingContext.scala ---
@@ -373,6 +393,25 @@ class StreamingContext private[streaming] (
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/3803#discussion_r22287941
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/StreamingContext.scala ---
@@ -373,6 +393,25 @@ class StreamingContext private[streaming] (
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/3803#discussion_r22287887
--- Diff:
streaming/src/test/scala/org/apache/spark/streaming/InputStreamsSuite.scala ---
@@ -233,6 +236,47 @@ class InputStreamsSuite extends TestSuiteBase
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/3803#discussion_r22287877
--- Diff:
streaming/src/test/scala/org/apache/spark/streaming/InputStreamsSuite.scala ---
@@ -233,6 +236,47 @@ class InputStreamsSuite extends TestSuiteBase
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/3805#issuecomment-68156344
Also, the PR / JIRA title is confusing; I can't really guess what this
patch does based on the title, since "fix an implicit bug" could mean many
different things. A b
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/3805#issuecomment-68156238
This class of issue could be a more general problem for our test-suites,
since I think there are a number of places where we call things like `new
SparkConf()` that mig
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3805#issuecomment-68156186
[Test build #24839 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24839/consoleFull)
for PR 3805 at commit
[`41ede0e`](https://githu
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/3805#issuecomment-68156052
Super-minor process nit, but do you mind moving your comment into the PR
description itself? The PR description automatically becomes the commit
message, so keeping it
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/3805#issuecomment-68156011
Jenkins, this is ok to test.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/3807
---
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/3807#issuecomment-68155797
LGTM, so I'll merge this.
In the future, I wouldn't bother to file JIRA issues for super-small
one-word documentation fixes like this, since the JIRA issue is e
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/3809#issuecomment-68155681
By the way, I left a [comment over on
JIRA](https://issues.apache.org/jira/browse/SPARK-4787?focusedCommentId=14259202&page=com.atlassian.jira.plugin.system.issuetabpane
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/3809#issuecomment-68155357
This is a nice fix.
Resource leaks when SparkContext's constructor throws exceptions have been
a longstanding issue. I first ran across the issue while adding
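The leak pattern being discussed (resources acquired by a constructor surviving a constructor failure) has a standard remedy: track what has been acquired and release it in the catch path. This is a hypothetical, generic sketch of that idiom, not the actual SparkContext code:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch: a constructor that tracks partially-initialized
// resources and releases them if a later initialization step throws, so a
// failed construction leaks nothing.
class Context implements AutoCloseable {
    private final Deque<AutoCloseable> acquired = new ArrayDeque<>();
    static boolean listenerClosed = false;

    Context(boolean failLate) throws Exception {
        try {
            // Acquire a resource (e.g. a listener bus) and remember it.
            acquired.push(() -> { listenerClosed = true; });
            if (failLate) {
                throw new IllegalStateException("init step failed");
            }
        } catch (Exception e) {
            close();  // undo everything acquired so far, then rethrow
            throw e;
        }
    }

    @Override
    public void close() throws Exception {
        while (!acquired.isEmpty()) {
            acquired.pop().close();  // release in reverse acquisition order
        }
    }
}

public class Main {
    public static void main(String[] args) throws Exception {
        try {
            new Context(true);
        } catch (IllegalStateException expected) {
            // Constructor failed, but its partial resources were released.
        }
        System.out.println(Context.listenerClosed); // prints true
    }
}
```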
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3809#issuecomment-68155166
[Test build #24838 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24838/consoleFull)
for PR 3809 at commit
[`2172578`](https://githu
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/3809#issuecomment-68155076
Jenkins, this is ok to test.
---
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/3809#discussion_r22287446
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -329,8 +329,11 @@ class SparkContext(config: SparkConf) extends Logging
with Execut
Github user koertkuipers commented on the pull request:
https://github.com/apache/spark/pull/3632#issuecomment-68150977
@markhamstra take a look now.
I ignored the situation of K and V having the same type, since I think it can
be dealt with by using a simple wrapper (value) class for
Github user koeninger commented on the pull request:
https://github.com/apache/spark/pull/3798#issuecomment-68149432
Hi @jerryshao
I'd politely ask that anyone with questions read at least KafkaRDD.scala
and the example usage linked from the jira ticket (it's only about 50
s
Github user sarutak commented on the pull request:
https://github.com/apache/spark/pull/3811#issuecomment-68143249
If we run in client mode (including local mode), the driver runs on the client
and the executors don't, so no one shares the local directories of the
driver.
---
Github user sarutak commented on the pull request:
https://github.com/apache/spark/pull/3812#issuecomment-68143144
There is no functional change for Spark itself; it's rather for
other systems associated with Spark, like monitoring systems. The property is
used for metrics
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/3811#issuecomment-68142275
Does this define a new system property just for deployment mode? This logic
looks like it is applied even when the external shuffle service is not enabled. Why
is the driver b
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/3812#issuecomment-68142138
Is there a functional change here? The value is now instead of
driver. It sounds good to be consistent but I wonder if there is a reason for
the difference.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3046#issuecomment-68141518
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/24
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3046#issuecomment-68141516
[Test build #24837 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24837/consoleFull)
for PR 3046 at commit
[`41ef90e`](https://gith
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3812#issuecomment-68141397
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/24
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3812#issuecomment-68141394
[Test build #24835 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24835/consoleFull)
for PR 3812 at commit
[`4275663`](https://gith
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3433#issuecomment-68141153
[Test build #24836 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24836/consoleFull)
for PR 3433 at commit
[`9b94d48`](https://gith
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3433#issuecomment-68141157
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/24
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/3778#issuecomment-68140510
Not only that; actually this PR covers optimizations as follows:
```
And/Or with same condition
a && a => a , a && a && a ... => a
a || a => a , a || a || a ... =
```
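The idempotence rules quoted above (`a && a => a`, `a || a => a`) can be sketched with a toy expression tree. This is a hypothetical illustration of the rewrite, not Catalyst's actual `Expression` hierarchy:

```java
// Toy expression tree: variables plus And/Or nodes.
abstract class Expr {}
class Var extends Expr {
    final String name;
    Var(String name) { this.name = name; }
    @Override public boolean equals(Object o) {
        return o instanceof Var && ((Var) o).name.equals(name);
    }
    @Override public int hashCode() { return name.hashCode(); }
}
class And extends Expr {
    final Expr left, right;
    And(Expr l, Expr r) { left = l; right = r; }
}
class Or extends Expr {
    final Expr left, right;
    Or(Expr l, Expr r) { left = l; right = r; }
}

public class Main {
    // Bottom-up simplification: rewrite children first, then apply the
    // idempotence rule at the current node.
    static Expr simplify(Expr e) {
        if (e instanceof And) {
            Expr l = simplify(((And) e).left), r = simplify(((And) e).right);
            return l.equals(r) ? l : new And(l, r);   // a && a => a
        }
        if (e instanceof Or) {
            Expr l = simplify(((Or) e).left), r = simplify(((Or) e).right);
            return l.equals(r) ? l : new Or(l, r);    // a || a => a
        }
        return e;
    }

    public static void main(String[] args) {
        Expr a = new Var("a");
        // (a && a) && a collapses all the way down to a.
        Expr result = simplify(new And(new And(a, a), a));
        System.out.println(result.equals(a)); // prints true
    }
}
```

Running the rules bottom-up is what lets chained duplicates (`a && a && a ...`) collapse in a single pass, since each inner collapse exposes another equal pair.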
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3811#issuecomment-68140420
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/24
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3811#issuecomment-68140417
[Test build #24834 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24834/consoleFull)
for PR 3811 at commit
[`d99718e`](https://gith
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3810#issuecomment-68140337
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/24
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3810#issuecomment-68140334
[Test build #24833 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24833/consoleFull)
for PR 3810 at commit
[`05469de`](https://gith
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3433#issuecomment-68139017
[Test build #24836 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24836/consoleFull)
for PR 3433 at commit
[`9b94d48`](https://githu
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3046#issuecomment-68139015
[Test build #24837 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24837/consoleFull)
for PR 3046 at commit
[`41ef90e`](https://githu
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3812#issuecomment-68138840
[Test build #24835 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24835/consoleFull)
for PR 3812 at commit
[`4275663`](https://githu
GitHub user sarutak opened a pull request:
https://github.com/apache/spark/pull/3812
[Minor] Fix the value represented by spark.executor.id for the driver of
local mode.
When we run an application in local mode, the property `spark.executor.id`
represents `driver` for the driver.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3811#issuecomment-68138582
[Test build #24834 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24834/consoleFull)
for PR 3811 at commit
[`d99718e`](https://githu
GitHub user sarutak opened a pull request:
https://github.com/apache/spark/pull/3811
[SPARK-4973][CORE] Local directory in the driver of client-mode continues
remaining even if application finished when external shuffle is enabled
When we enable the external shuffle service, local dire
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3810#issuecomment-68137843
[Test build #24833 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24833/consoleFull)
for PR 3810 at commit
[`05469de`](https://githu
GitHub user YanTangZhai opened a pull request:
https://github.com/apache/spark/pull/3810
[SPARK-4962] [CORE] Put TaskScheduler.start back in SparkContext to shorten
cluster resources occupation period
When a SparkContext object is instantiated, TaskScheduler is started and some
resou
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3808#issuecomment-68134354
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/24