Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/2343#discussion_r17506390
--- Diff: core/src/main/scala/org/apache/spark/TaskContext.scala ---
@@ -77,6 +77,8 @@ class TaskContext(
/**
* Add a listener in the form
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/2343#issuecomment-55469882
I left a minor comment - Otherwise LGTM
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/2343#issuecomment-55469931
Jenkins, retest this please
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/2339#issuecomment-55546927
One more thing we could do is to get instance status check reports using
boto. These show up in the web UI as `2/2 checks passed` etc., and we should be
able to get
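(The status-check call itself is boto on the Python side; purely as an
illustration in Scala, the same report can be pulled through the AWS Java SDK
v1. The client wiring and the helper below are assumptions for the sketch,
not code from this PR.)

    import com.amazonaws.services.ec2.AmazonEC2Client
    import com.amazonaws.services.ec2.model.DescribeInstanceStatusRequest
    import scala.collection.JavaConverters._

    // Hypothetical helper: "2/2 checks passed" corresponds to both the
    // system status and the instance status summaries being "ok".
    def statusChecks(ec2: AmazonEC2Client, ids: Seq[String]): Map[String, String] = {
      val req = new DescribeInstanceStatusRequest().withInstanceIds(ids.asJava)
      ec2.describeInstanceStatus(req).getInstanceStatuses.asScala.map { s =>
        s.getInstanceId -> s"${s.getSystemStatus.getStatus}/${s.getInstanceStatus.getStatus}"
      }.toMap
    }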
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/2404#issuecomment-55691649
LGTM
GitHub user shivaram opened a pull request:
https://github.com/apache/spark/pull/2510
Set EC2 version to 1.1.0 and update version map
This brings the master branch in sync with branch-1.1
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/2510#issuecomment-56570875
@pwendell Can we add this to the release process?
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/2510#issuecomment-56574310
Jenkins, retest this please
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/2510#issuecomment-56578228
@shaneknapp Any ideas why the Jenkins checkout keeps failing?
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/2510#issuecomment-56595994
Thanks @shaneknapp for taking a look.
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/1309#discussion_r15258695
--- Diff: core/src/main/scala/org/apache/spark/Accumulators.scala ---
@@ -51,6 +51,13 @@ class Accumulable[R, T] (
Accumulators.register
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/1309#issuecomment-49807746
At a high level this is great, and I think it will address the use case
that I outlined on the mailing list. I do like the idea of reusing the
accumulator machinery we
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/1309#issuecomment-49808077
It's worthwhile to keep in mind https://github.com/apache/spark/pull/1056,
which adds partial updates to TaskMetrics. Another reason why unifying these
things might
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/1471#issuecomment-49911839
Actually, hold off on merging this -- I found that this patch doesn't
completely solve the problem. The issue, I think, is that `finishConnect` throws
an IOException [1
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/1549#issuecomment-49958588
This is a great idea. It might actually fix SPARK-2563 as well. I'll give
this a try soon
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/731#issuecomment-49966556
I haven't looked at the code change, but it might not be a good idea to
remove support for multiple workers on a machine. When you have really
large-memory machines
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/1579#issuecomment-50068305
Is there an estimate of the size of an event? How much memory would 20
events take?
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/1549#issuecomment-50101605
@pwendell Can this also be backported to the 1.0 branch?
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/1471#issuecomment-50208195
@mateiz So I looked at this more closely today -- it turns out these
retries don't help much with `Connection timed out` exceptions. If the connection
attempt times out
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/1471#issuecomment-50210528
Yeah, I will close this PR -- should I just modify SPARK-2563 for the socket
re-opening issue, or do you think a new JIRA is better?
Github user shivaram closed the pull request at:
https://github.com/apache/spark/pull/1471
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/1471#issuecomment-50371592
Updated the JIRA -- Closing this issue
GitHub user shivaram opened a pull request:
https://github.com/apache/spark/pull/1697
[SPARK-2774] Set preferred locations for reduce tasks
Motivation for the change is in the JIRA. There are a couple of things that I
would like feedback about:
1. Should we sort the map
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/1697#issuecomment-50831954
Thanks for taking a look -- one thing I realized is that we only need the
top 5 and don't need to sort the data. I'll try to use the Guava Ordering class and
do some
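A minimal sketch of the top-k idea using Guava's Ordering, assuming the
map-output sizes are plain longs (the helper name is made up; greatestOf
avoids sorting the whole list):

    import com.google.common.collect.{Ordering => GuavaOrdering}
    import scala.collection.JavaConverters._

    // Pick the k largest sizes in roughly O(n) instead of sorting all n.
    def topK(sizes: Seq[Long], k: Int): Seq[Long] =
      GuavaOrdering.natural[java.lang.Long]()
        .greatestOf(sizes.map(Long.box).asJava, k)
        .asScala.map(Long.unbox)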
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/1697#discussion_r15681022
--- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala ---
@@ -1152,6 +1155,18 @@ class DAGScheduler(
return locs
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/1697#discussion_r15681031
--- Diff: core/src/main/scala/org/apache/spark/MapOutputTracker.scala ---
@@ -284,6 +290,24 @@ private[spark] class MapOutputTrackerMaster(conf:
SparkConf
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/1697#issuecomment-50844818
I switched to using Guava's ordering function now and added another unit
test for that. I plan to do a microbenchmark to see how long it takes to get
top 5 from a list
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/1697#issuecomment-50930229
I ran some microbenchmarks as outlined at
https://gist.github.com/shivaram/63620c47f0ad50106e0a
The comments below the gist have some numbers that I got on my laptop
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/1697#issuecomment-50930865
One more thing we can do is to coalesce sizes from all tasks on a machine
and only do node-level locality. As map outputs are on disk, there shouldn't be
any difference
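A hedged sketch of that coalescing idea, assuming (host, size) pairs for each
map output; the names are illustrative rather than from the patch:

    // Sum output sizes per host so that preferred locations are chosen
    // at node granularity instead of per executor.
    def sizesByHost(outputs: Seq[(String, Long)]): Map[String, Long] =
      outputs.groupBy(_._1).map { case (host, perHost) => host -> perHost.map(_._2).sum }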
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/1679#issuecomment-50947160
@andrewor14 Thanks a lot for fixing this. I am testing this out on a 200-node
EC2 cluster right now, and so far it looks good. It's a long-running job though,
so I'll
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/1679#issuecomment-50952987
@pwendell @andrewor14
Yes - the run went fine. I didn't see any listener bus overflows, and the UI
was fine. Also, I previously used to see 1 CPU fully occupied
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/1844#issuecomment-51563228
One thing that's good to keep in mind is that applications written on top
of Spark may also have their own log4j.properties, core-site.xml, etc. So
putting Spark's
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/1844#issuecomment-51643912
Agree about updating the documentation.
I don't know if application jars take precedence when you use SparkSubmit.
On the executor side I think JVMs are always
GitHub user shivaram opened a pull request:
https://github.com/apache/spark/pull/1869
Add gc time and shuffle write time to JobLogger
The JobLogger is very useful for performing offline performance profiling
of Spark jobs. GC Time and Shuffle Write time are available in TaskMetrics
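For reference, a sketch of the two TaskMetrics fields in question (Spark
1.x-era shapes; GC time is reported in milliseconds and shuffle write time in
nanoseconds):

    import org.apache.spark.executor.TaskMetrics

    // Pull the GC time (ms) and shuffle write time (ns) off a task's metrics.
    def gcAndShuffleWrite(m: TaskMetrics): (Long, Long) = {
      val gcMs = m.jvmGCTime
      val writeNs = m.shuffleWriteMetrics.map(_.shuffleWriteTime).getOrElse(0L)
      (gcMs, writeNs)
    }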
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/1869#issuecomment-51702275
Created a JIRA and updated the title.
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/1865#discussion_r16033275
--- Diff: core/src/main/scala/org/apache/spark/network/netty/FileClient.scala ---
@@ -0,0 +1,85 @@
+/*
+ * Licensed to the Apache Software
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/1632#issuecomment-51864387
Another way to solve this is to change the BasicBlockFetcher to use `poll`
with a timeout in LinkedBlockingQueue [1]
[1]
http://docs.oracle.com/javase/7/docs
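A self-contained sketch of the poll-with-timeout pattern being suggested; the
queue element type and the 60-second timeout are illustrative, not taken from
the actual fetcher:

    import java.util.concurrent.{LinkedBlockingQueue, TimeUnit}

    val results = new LinkedBlockingQueue[String]()
    // Unlike take(), poll returns null once the timeout expires instead of
    // blocking forever on a fetch that will never complete.
    val result = results.poll(60, TimeUnit.SECONDS)
    if (result == null) {
      throw new java.util.concurrent.TimeoutException("no fetch result within 60s")
    }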
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/1632#issuecomment-51873138
@sarutak You are right that using poll wouldn't clear up the internal state
in ConnectionManager. I think @JoshRosen's idea of using a shared timer pool
or re-using
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/1907#issuecomment-51957070
@rxin Very nice! Do you have any benchmarks of how fast things are? In
terms of, say, the % of network bandwidth we can use?
GitHub user shivaram opened a pull request:
https://github.com/apache/spark/pull/2829
[SPARK-3973] Print call site information for broadcasts
It's hard to debug which broadcast variables refer to what in a big
codebase. Printing call site information helps in debugging.
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/2829#issuecomment-59414792
Jenkins, retest this please
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/2829#issuecomment-59427068
@rxin Can you take a look?
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/2851#discussion_r19096569
--- Diff: core/src/main/scala/org/apache/spark/HeartbeatReceiver.scala ---
@@ -30,7 +30,8 @@ import org.apache.spark.util.ActorLogReceive
private[spark
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/2832#issuecomment-59824830
@kayousterhout - This is failing scalastyle checks -- could you run the style
check locally?
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/2844#discussion_r19119727
--- Diff: core/src/main/scala/org/apache/spark/broadcast/TorrentBroadcast.scala ---
@@ -104,29 +112,23 @@ private[spark] class TorrentBroadcast[T: ClassTag
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/2844#discussion_r19119774
--- Diff: core/src/main/scala/org/apache/spark/broadcast/TorrentBroadcast.scala ---
@@ -227,6 +217,7 @@ private object TorrentBroadcast extends Logging
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/2844#discussion_r19121626
--- Diff: core/src/main/scala/org/apache/spark/broadcast/TorrentBroadcast.scala ---
@@ -104,29 +112,23 @@ private[spark] class TorrentBroadcast[T: ClassTag
GitHub user shivaram opened a pull request:
https://github.com/apache/spark/pull/2871
[WIP] [SPARK-4031] Make torrent broadcast read blocks on use.
This avoids reading broadcast variables when they are referenced in the
closure but not used by the code.
Note: This is a WIP
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/2871#issuecomment-59881872
@JoshRosen -- yes, that should be fine. I will rebase once #2844 is checked
in
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/2844#discussion_r19130957
--- Diff: core/src/main/scala/org/apache/spark/broadcast/TorrentBroadcast.scala ---
@@ -227,6 +217,7 @@ private object TorrentBroadcast extends Logging
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/2871#discussion_r19192935
--- Diff: core/src/main/scala/org/apache/spark/broadcast/TorrentBroadcast.scala ---
@@ -63,12 +63,22 @@ private[spark] class TorrentBroadcast[T: ClassTag
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/2871#issuecomment-60035820
I also added a test case to check that blocks are read on use
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/2871#issuecomment-60121971
Yeah, the only performance regression is in reading this volatile boolean if
you call `.value` many times. But I think that should be low (and it should be
insignificant
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/2851#issuecomment-60316298
@rxin - Here is a simpler design -- what if we report all broadcast blocks
to the master when they are added to a block manager as well (tellMaster = true
instead
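A sketch of the design being floated, against the Spark 1.x
BlockManager.putSingle signature (the wrapper is hypothetical, and BlockManager
is private[spark], so this would have to live inside Spark itself):

    import org.apache.spark.storage.{BlockId, BlockManager, StorageLevel}

    // Report the broadcast block to the master when it is stored locally,
    // so the driver learns about every replica as it appears.
    def storeBroadcastBlock(bm: BlockManager, id: BlockId, value: Any): Unit = {
      bm.putSingle(id, value, StorageLevel.MEMORY_AND_DISK, tellMaster = true)
    }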
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/2851#issuecomment-60316582
Sorry, that link should have been
https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/broadcast/TorrentBroadcast.scala#L181
GitHub user shivaram opened a pull request:
https://github.com/apache/spark/pull/2922
[SPARK-4030] Make destroy public for broadcast variables
This change makes the destroy function public for broadcast variables.
Motivation for the change is described in
https://issues.apache.org
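A usage sketch of the API once destroy() is public, with a throwaway
local-mode context for illustration:

    import org.apache.spark.{SparkConf, SparkContext}

    val sc = new SparkContext(
      new SparkConf().setAppName("bc-destroy").setMaster("local[2]"))
    val bc = sc.broadcast(Map("a" -> 1, "b" -> 2))
    sc.parallelize(1 to 10).map(i => bc.value.getOrElse("a", 0) + i).count()
    bc.destroy()  // blocking by default; frees state on the driver and executors
    sc.stop()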
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/2922#issuecomment-60350235
cc @pwendell @rxin for review
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/2922#discussion_r19373743
--- Diff: core/src/main/scala/org/apache/spark/broadcast/Broadcast.scala ---
@@ -87,10 +91,13 @@ abstract class Broadcast[T: ClassTag](val id: Long)
extends
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/2922#discussion_r19373771
--- Diff: core/src/main/scala/org/apache/spark/broadcast/Broadcast.scala ---
@@ -60,6 +62,8 @@ abstract class Broadcast[T: ClassTag](val id: Long)
extends
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/2922#issuecomment-60553667
@pwendell - I made destroy blocking by default and only made that version
public (it's not clear we need the non-blocking version to also be public -- we
can add
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/2922#issuecomment-60615291
Thanks. Merging this
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/1697#issuecomment-60624723
Ping @rxin -- any thoughts on this? I can merge to upstream, and it'll be
great to have this in 1.2
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/2871#issuecomment-60710651
@rxin -- I changed it to use a lazy val and updated the unit test; my local
testing suggests this works. Can you take a look?
Also @JoshRosen I merged
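A minimal sketch of the lazy-val pattern described here (not the actual
TorrentBroadcast code): the first access to value triggers the block reads,
and Scala's lazy val caches the result thread-safely after that.

    // readBlocks stands in for fetching and deserializing the torrent blocks.
    class LazilyReadBroadcast[T](readBlocks: () => T) {
      @transient private lazy val _value: T = readBlocks()
      def value: T = _value
    }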
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/2871#discussion_r19479618
--- Diff: core/src/main/scala/org/apache/spark/broadcast/TorrentBroadcast.scala ---
@@ -173,15 +175,21 @@ private[spark] class TorrentBroadcast[T:
ClassTag
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/2871#discussion_r19479577
--- Diff: core/src/main/scala/org/apache/spark/broadcast/TorrentBroadcast.scala ---
@@ -173,15 +175,21 @@ private[spark] class TorrentBroadcast[T:
ClassTag
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/2871#discussion_r19480010
--- Diff: core/src/main/scala/org/apache/spark/broadcast/TorrentBroadcast.scala ---
@@ -157,14 +161,12 @@ private[spark] class TorrentBroadcast[T:
ClassTag
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/2871#issuecomment-60778158
Thanks @rxin and @JoshRosen for taking a look. I will merge after Jenkins
passes
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/2944#issuecomment-60818745
@JoshRosen This is super awesome!
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/2944#issuecomment-60837003
Do you know how large the threadDump typically is? I'm concerned this
might make the heartbeat too large
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/2944#issuecomment-60837888
The other idea I had was that we could just open a port on the executor and
have a web UI on it. This could also display the executor's stderr (which is
very painful
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/2944#issuecomment-60846384
Yes - I think having a separate RPC sounds good for now.
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/2944#issuecomment-60857056
That sounds good -- actually, could we make this a request-reply pattern?
I.e., we only fetch the stack traces if somebody clicks on the link?
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/2944#issuecomment-60858892
Hmm, okay - I agree that we don't really have a request-reply route from
the web UI (maybe this is also worth investigating if/when we have an executor
web UI
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/2907#issuecomment-60877866
Yes - treeAggregate is very useful -- in fact, I was going to suggest moving
it to the core RDD API. Any reason not to do that?
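For context, a usage sketch of treeAggregate in the shape it later took on
the core RDD API, assuming an existing SparkContext sc; depth controls the
fan-in of the combine tree:

    // Sum 1..1000 with partial combines arranged in a depth-2 tree, so less
    // data is sent straight back to the driver than with plain aggregate.
    val rdd = sc.parallelize(1L to 1000L, 100)
    val sum = rdd.treeAggregate(0L)(
      seqOp = (acc, x) => acc + x,
      combOp = (a, b) => a + b,
      depth = 2)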
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/2988#issuecomment-61008323
@nchammas does the template replacement code still work correctly? I am
referring to the deploy.generic dir that we pass from
https://github.com/apache/spark/blob
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/2988#issuecomment-61021907
Thanks for taking a closer look! I don't know much Python, but can't we
get the directory that the script is in using something like `__file__` and
prefix
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/2988#discussion_r19586967
--- Diff: ec2/spark_ec2.py ---
@@ -718,12 +726,16 @@ def get_num_disks(instance_type):
return 1
-# Deploy the configuration file
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/2988#issuecomment-61043164
Functionality LGTM. I left a minor style question for @JoshRosen
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/3008#issuecomment-61044027
LGTM
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/2931#discussion_r19612661
--- Diff: streaming/src/main/scala/org/apache/spark/streaming/rdd/WriteAheadLogBackedBlockRDD.scala ---
@@ -0,0 +1,125 @@
+/*
+ * Licensed
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/2907#issuecomment-61138697
Agree that using aggregate vs. treeAggregate depends on the computation and
reduction function -- but I don't think it's specific to MLlib per se. Any Spark
application
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/2744#issuecomment-61317915
This patch seems to have broken the build? See
https://amplab.cs.berkeley.edu/jenkins/job/Spark-Master-Maven-with-YARN/HADOOP_PROFILE=hadoop-2.4,label=centos/821
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/2744#issuecomment-61319258
I have a commit that just comments out the test at
https://github.com/shivaram/spark-1/compare/fix-yarn-build?expand=1 -- but if
you have a better fix, we can use
GitHub user shivaram opened a pull request:
https://github.com/apache/spark/pull/3040
HOTFIX: Comment out YarnAllocatorSuite test case to fix broken build
@sryza @tgravescs -- Let me know if you have a better fix
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/3041#issuecomment-61320414
This might be a better fix than https://github.com/apache/spark/pull/3040
LGTM
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/3041#issuecomment-61320678
Well my fix was just commenting out the test case -- I think this is better.
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/3040#issuecomment-61320862
I am going to close this in favor of
https://github.com/apache/spark/pull/3041
Github user shivaram closed the pull request at:
https://github.com/apache/spark/pull/3040
GitHub user shivaram opened a pull request:
https://github.com/apache/spark/pull/1471
[SPARK-2563] Make connection retries configurable
On a large EC2 cluster, I often see the first shuffle stage in a job fail
due to connection timeout exceptions. This patch makes the number
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/1488#issuecomment-49489355
LGTM. BTW, it would be good to add the average task size and average task
result size per stage at the top. Is that something we track?
GitHub user shivaram opened a pull request:
https://github.com/apache/spark/pull/219
Fix scheduler to account for tasks using > 1 CPU.
Move CPUS_PER_TASK to TaskSchedulerImpl as the value is a constant and use
it in both Mesos and CoarseGrained scheduler backends.
Thanks
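A sketch of the shape of that change ("spark.task.cpus" is the real
configuration key; the class around it is pared down for illustration):

    import org.apache.spark.SparkConf

    // One shared constant in the scheduler, read by both the Mesos and
    // CoarseGrained backends instead of each keeping its own copy.
    class TaskSchedulerImplSketch(conf: SparkConf) {
      val CPUS_PER_TASK: Int = conf.getInt("spark.task.cpus", 1)
    }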
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/219#discussion_r10915339
--- Diff: core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala ---
@@ -62,6 +62,9 @@ private[spark] class TaskSchedulerImpl
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/219#discussion_r10915407
--- Diff: core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala ---
@@ -62,6 +62,9 @@ private[spark] class TaskSchedulerImpl
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/219#issuecomment-38524982
FYI - there was a test failure in TaskSetManagerSuite, as a unit test was
checking for availableCpus being zero. So I added back a non-zero check in
TaskSetManager
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/219#issuecomment-38529622
Jenkins, retest this please
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/219#discussion_r10918164
--- Diff: core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala ---
@@ -388,7 +385,7 @@ private[spark] class TaskSetManager
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/221#issuecomment-38597584
Jenkins, this is okay to test
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/219#issuecomment-38601404
Jenkins, retest this please
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/219#issuecomment-38601384
Failure was in `recovery with file input stream` -- something that I think is
completely unrelated to this change. So let's try again
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/237#discussion_r10996156
--- Diff: project/SparkBuild.scala ---
@@ -395,8 +404,6 @@ object SparkBuild extends Build {
// assembly jar.
def hiveSettings = sharedSettings