Github user drcrallen commented on the issue:
https://github.com/apache/spark/pull/16888
taken over by https://github.com/apache/spark/pull/19884 ?
---
-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
Github user drcrallen commented on the issue:
https://github.com/apache/spark/pull/14637
FYI, we have a build process that packages Spark core. Now that Mesos is
in its own artifact, this broke our build and deploy process, and it's not
called out in the release notes
---
Github user drcrallen commented on the issue:
https://github.com/apache/spark/pull/16714
@vanzin can you check this out please?
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
Github user drcrallen commented on the issue:
https://github.com/apache/spark/pull/13713
@mgummelt do you know what still needs to be done to get this in?
---
Github user drcrallen commented on a diff in the pull request:
https://github.com/apache/spark/pull/13713#discussion_r83297642
--- Diff:
mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala
---
@@ -369,9 +369,11 @@ trait MesosSchedulerUtils
Github user drcrallen commented on a diff in the pull request:
https://github.com/apache/spark/pull/13713#discussion_r83113285
--- Diff:
mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackend.scala
---
@@ -59,6 +59,8 @@ private[spark] class
Github user drcrallen commented on a diff in the pull request:
https://github.com/apache/spark/pull/13713#discussion_r83113266
--- Diff:
mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterScheduler.scala
---
@@ -129,6 +129,7 @@ private[spark] class
Github user drcrallen commented on a diff in the pull request:
https://github.com/apache/spark/pull/13713#discussion_r83113237
--- Diff: docs/running-on-mesos.md ---
@@ -506,8 +506,13 @@ See the [configuration page](configuration.html) for
information on Spark config
Github user drcrallen commented on a diff in the pull request:
https://github.com/apache/spark/pull/13713#discussion_r83112712
--- Diff:
mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackendSuite.scala
---
@@ -463,6 +463,21 @@ class
Github user drcrallen commented on a diff in the pull request:
https://github.com/apache/spark/pull/13713#discussion_r82714540
--- Diff: docs/running-on-mesos.md ---
@@ -506,8 +506,13 @@ See the [configuration page](configuration.html) for
information on Spark config
Github user drcrallen commented on the issue:
https://github.com/apache/spark/pull/13713
@tnachen let me know if there are any outstanding issues here.
---
Github user drcrallen commented on the issue:
https://github.com/apache/spark/pull/13713
Tested with `mvn test -Pmesos -pl mesos
-Dtest=MesosCoarseGrainedSchedulerBackendSuite`
---
Github user drcrallen commented on the issue:
https://github.com/apache/spark/pull/13713
I'm still adding a basic test for this.
---
Github user drcrallen commented on the issue:
https://github.com/apache/spark/pull/13713
I can update this today.
---
Github user drcrallen commented on a diff in the pull request:
https://github.com/apache/spark/pull/13715#discussion_r74952999
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala
---
@@ -382,59 +382,97 @@ private[spark
Github user drcrallen commented on the issue:
https://github.com/apache/spark/pull/14552
super cool, thanks @mgummelt !
---
Github user drcrallen commented on the issue:
https://github.com/apache/spark/pull/13713
@tnachen this test is a little larger than I originally anticipated. Let me
see if I can add some unit tests
---
Github user drcrallen commented on the issue:
https://github.com/apache/spark/pull/13713
OoooOOO master updated to 1.0.0
Fixed merge conflicts
---
Github user drcrallen commented on a diff in the pull request:
https://github.com/apache/spark/pull/11157#discussion_r69344006
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala
---
@@ -356,4 +374,233 @@ private[mesos] trait
Github user drcrallen commented on the issue:
https://github.com/apache/spark/pull/11157
How would I set a JMX port or debug port with this approach?
---
Github user drcrallen commented on the issue:
https://github.com/apache/spark/pull/13713
@tnachen any comments?
---
GitHub user drcrallen reopened a pull request:
https://github.com/apache/spark/pull/13715
[SPARK-15992] [MESOS] Refactor MesosCoarseGrainedSchedulerBackend offer
consideration
The offer acceptance workflow is a little hard to follow and not very
extensible for future
Github user drcrallen commented on the issue:
https://github.com/apache/spark/pull/13715
I broke something in the test suite; evaluating, and I'll reopen this in
a bit
---
Github user drcrallen closed the pull request at:
https://github.com/apache/spark/pull/13715
---
Github user drcrallen commented on the issue:
https://github.com/apache/spark/pull/10808
In https://issues.apache.org/jira/browse/SPARK-15992 I tried to clean up the
offer consideration for the coarse executor a little more. The idea being that
using the "improved" r
Github user drcrallen commented on the issue:
https://github.com/apache/spark/pull/10232
@PedroAlvarado I just looked at master and 2.0 and the limit of one task
per slave seems to have been removed. I do not think this is valid anymore.
Closing out ticket.
---
GitHub user drcrallen opened a pull request:
https://github.com/apache/spark/pull/13715
[SPARK-15992] [MESOS] Refactor MesosCoarseGrainedSchedulerBackend offer
consideration
The offer acceptance workflow is a little hard to follow and not very
extensible for future considerations
GitHub user drcrallen opened a pull request:
https://github.com/apache/spark/pull/13713
[SPARK-15994] [MESOS] Allow enabling Mesos fetch cache in coarse executor
backend
Mesos 0.23.0 introduces a Fetch Cache feature
http://mesos.apache.org/documentation/latest/fetcher/ which
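For context, the feature this PR proposes is toggled by a single boolean property. A minimal sketch of enabling it, assuming the `spark.mesos.fetcherCache.enable` property that the eventual patch documents (the master URL and application names below are placeholders):

```shell
# Sketch: enable the Mesos fetch cache for coarse-grained executors.
# spark.mesos.fetcherCache.enable is the property from this patch;
# the master URL, class, and jar are hypothetical.
spark-submit \
  --master mesos://zk://zk1:2181/mesos \
  --conf spark.mesos.fetcherCache.enable=true \
  --class org.example.MyApp \
  myapp.jar
```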
Github user drcrallen commented on the issue:
https://github.com/apache/spark/pull/10232
@drcrallen Did https://github.com/apache/spark/pull/4027 cover the memory
constraints you were adding as part of this issue? I can't find another open or
closed issue that addresses it
GitHub user drcrallen opened a pull request:
https://github.com/apache/spark/pull/12301
[SPARK-14537] Make TaskSchedulerImpl waiting fail if context is shut down
This patch makes the postStartHook throw an IllegalStateException if the
SparkContext is shut down while it is waiting
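The behavior described above can be sketched roughly as follows (illustrative only, not the actual patch; the method name and predicates are simplified stand-ins for what TaskSchedulerImpl does):

```scala
// Sketch: while waiting for the scheduler backend to become ready,
// fail fast with an IllegalStateException if the context was stopped,
// instead of spinning forever against a dead SparkContext.
object WaitSketch {
  def waitBackendReady(isReady: () => Boolean, isStopped: () => Boolean): Unit = {
    while (!isReady()) {
      if (isStopped()) {
        throw new IllegalStateException(
          "Spark context stopped while waiting for backend")
      }
      Thread.sleep(100)
    }
  }
}
```

Without a check like this, a shutdown during startup leaves the wait loop blocked indefinitely.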
Github user drcrallen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10319#discussion_r54676391
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala
---
@@ -364,7 +379,29 @@ private[spark] class
Github user drcrallen commented on the pull request:
https://github.com/apache/spark/pull/10319#issuecomment-179968762
@dragos removed
---
Github user drcrallen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10319#discussion_r51910856
--- Diff: docs/running-on-mesos.md ---
@@ -387,6 +387,13 @@ See the [configuration page](configuration.html) for
information on Spark config
Github user drcrallen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10319#discussion_r51928513
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala
---
@@ -60,6 +62,12 @@ private[spark] class
Github user drcrallen commented on the pull request:
https://github.com/apache/spark/pull/10993#issuecomment-180024934
@mgummelt I had done limits for memory per core in
https://github.com/apache/spark/pull/10232 in response to
https://issues.apache.org/jira/browse/SPARK-12248
Github user drcrallen commented on the pull request:
https://github.com/apache/spark/pull/10232#issuecomment-180034849
local fork of spark at https://github.com/metamx/spark/tree/v1.5.2-mmx is
using this successfully, but since #10993 will probably be the way to go
forward, I'm
Github user drcrallen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10319#discussion_r51638713
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala
---
@@ -364,7 +379,27 @@ private[spark] class
Github user drcrallen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10319#discussion_r51638882
--- Diff: docs/running-on-mesos.md ---
@@ -387,6 +387,13 @@ See the [configuration page](configuration.html) for
information on Spark config
Github user drcrallen commented on the pull request:
https://github.com/apache/spark/pull/10319#issuecomment-178850761
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/50593/consoleFull
is still running
---
Github user drcrallen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10319#discussion_r51637351
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala
---
@@ -364,7 +379,27 @@ private[spark] class
Github user drcrallen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10319#discussion_r51638750
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala
---
@@ -364,7 +379,27 @@ private[spark] class
Github user drcrallen commented on the pull request:
https://github.com/apache/spark/pull/10319#issuecomment-178850512
Fail was dumb (couldn't fetch from git). Needs retest
---
Github user drcrallen commented on the pull request:
https://github.com/apache/spark/pull/10319#issuecomment-178835670
@andrewor14 Addressed comments except for
https://github.com/apache/spark/pull/10319#discussion_r51638882
---
Github user drcrallen commented on the pull request:
https://github.com/apache/spark/pull/10319#issuecomment-176826287
@dragos thanks. My scalastyle keeps failing locally
(https://github.com/sbt/sbt/issues/2295), I'll see if I can get it fixed so
this stops failing scalastyle
Github user drcrallen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10319#discussion_r51287938
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala
---
@@ -364,7 +379,23 @@ private[spark] class
Github user drcrallen commented on the pull request:
https://github.com/apache/spark/pull/10319#issuecomment-176869277
```
spark charlesallen$ ./dev/lint-scala
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
Scalastyle
```
Github user drcrallen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10319#discussion_r51287135
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala
---
@@ -364,7 +379,23 @@ private[spark] class
Github user drcrallen commented on the pull request:
https://github.com/apache/spark/pull/10319#issuecomment-176890203
Failed on git fetch. @dragos how do I fix that?
---
Github user drcrallen commented on the pull request:
https://github.com/apache/spark/pull/10319#issuecomment-176899821
`mvn scalastyle:check` reproduces the error. Fixing.
---
Github user drcrallen commented on the pull request:
https://github.com/apache/spark/pull/10319#issuecomment-176971876
retest this please
---
Github user drcrallen commented on the pull request:
https://github.com/apache/spark/pull/10319#issuecomment-173748543
@tnachen / @andrewor14 ping
---
Github user drcrallen commented on the pull request:
https://github.com/apache/spark/pull/10319#issuecomment-172698287
Thanks for feedback. I'll get to the fixes here very shortly.
---
Github user drcrallen commented on the pull request:
https://github.com/apache/spark/pull/10319#issuecomment-172731505
@dragos Updated
---
Github user drcrallen commented on the pull request:
https://github.com/apache/spark/pull/10319#issuecomment-171356909
@tnachen Ping again regarding question in
https://github.com/apache/spark/pull/10319#issuecomment-169392267
---
Github user drcrallen commented on the pull request:
https://github.com/apache/spark/pull/10319#issuecomment-171359979
@tnachen I also haven't done a good job of making it clear earlier in this
PR that the block manager does not properly clean up without this patch.
See https
Github user drcrallen commented on the pull request:
https://github.com/apache/spark/pull/10319#issuecomment-169392267
@tnachen Do you have any suggestions on ways to wait for executors to
report as being cleaned up before calling `mesosDriver.stop()`?
---
Github user drcrallen commented on the pull request:
https://github.com/apache/spark/pull/10319#issuecomment-165814882
To clarify... That should solve the block cleanup issue. It will not solve
the executor reporting incorrect status.
If the executors are to exit "cl
Github user drcrallen commented on the pull request:
https://github.com/apache/spark/pull/10319#issuecomment-165809417
@tnachen For some reason the shutdown hooks are not finishing properly if
the process receives a SIGTERM during shutdown. See logs in
https://issues.apache.org/jira/browse
Github user drcrallen commented on the pull request:
https://github.com/apache/spark/pull/10319#issuecomment-165847162
Nope, SparkEnv.stop() does not block on multiple calls to make sure stop
has completed at least once. But even when ensuring that happens the shutdown
process still
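The missing behavior described here (a stop that is idempotent but still blocks every caller until shutdown has actually completed once) can be sketched as follows. This is an illustration under assumed names, not Spark's actual SparkEnv code:

```scala
import java.util.concurrent.CountDownLatch
import java.util.concurrent.atomic.AtomicBoolean

// Sketch: stop() may be called from several threads; only the first call
// does the work, but every caller returns only after cleanup has finished.
class StoppableEnv(cleanup: () => Unit) {
  private val stopStarted = new AtomicBoolean(false)
  private val stopComplete = new CountDownLatch(1)

  def stop(): Unit = {
    if (stopStarted.compareAndSet(false, true)) {
      try {
        cleanup() // release resources exactly once
      } finally {
        stopComplete.countDown()
      }
    } else {
      // Another thread is already stopping; wait for it to finish.
      stopComplete.await()
    }
  }
}
```

With only an `AtomicBoolean` guard and no latch, a second caller returns immediately while cleanup is still in flight, which matches the race being described.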
Github user drcrallen commented on the pull request:
https://github.com/apache/spark/pull/10319#issuecomment-165838673
I may have found the root cause of the failure to cleanup blocks. Testing
---
Github user drcrallen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10319#discussion_r48048812
--- Diff:
core/src/main/scala/org/apache/spark/executor/CoarseGrainedExecutorBackend.scala
---
@@ -45,6 +46,7 @@ private[spark] class
Github user drcrallen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10319#discussion_r48048809
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala
---
@@ -364,7 +379,22 @@ private[spark] class
Github user drcrallen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10319#discussion_r48048840
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala
---
@@ -60,6 +63,11 @@ private[spark] class
Github user drcrallen commented on the pull request:
https://github.com/apache/spark/pull/10319#issuecomment-165875292
After more thorough testing, it seems that there is still a race for
getting a `FINISHED` vs `KILLED` final status with this patch.
here's an example log
Github user drcrallen commented on the pull request:
https://github.com/apache/spark/pull/10319#issuecomment-165889542
The block manager is cleaning up as expected with this patch.
---
Github user drcrallen commented on the pull request:
https://github.com/apache/spark/pull/10319#issuecomment-165875644
You can see in the above log entry where the terminal heap information was
printed... THEN a SIGTERM was processed.
---
GitHub user drcrallen opened a pull request:
https://github.com/apache/spark/pull/10319
[SPARK-12330] [CORE] Fix mesos coarse mode cleanup
In the current implementation the mesos coarse scheduler does not wait for
the mesos tasks to complete before ending the driver. This causes
Github user drcrallen commented on the pull request:
https://github.com/apache/spark/pull/10232#issuecomment-163724877
Doesn't affect heap memory properly, closing until fixed
---
Github user drcrallen closed the pull request at:
https://github.com/apache/spark/pull/10232
---
GitHub user drcrallen opened a pull request:
https://github.com/apache/spark/pull/10232
[SPARK-12248] Adds limits per cpu for mesos coarse mode
* Add spark.cores.mb.min for minimum memory per core
* Add spark.cores.mb.max for max memory per core
You can merge this pull request
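The two proposed knobs would be set like any other Spark conf. Note these property names come from this PR only and were never merged into released Spark (the PR was later closed in favor of #10993), so this is a sketch of the proposal, not a supported configuration:

```shell
# Sketch only: spark.cores.mb.min / spark.cores.mb.max are the limits
# proposed in this (later closed) PR, not settings in released Spark.
# Here: reject offers below 1024 MB or above 8192 MB of memory per core.
spark-submit \
  --conf spark.cores.mb.min=1024 \
  --conf spark.cores.mb.max=8192 \
  --class org.example.MyApp \
  myapp.jar
```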
Github user drcrallen commented on the pull request:
https://github.com/apache/spark/pull/9243#issuecomment-151648828
This is still getting pretty stuck. Any clue if it's an actual problem or
just a weirdness in the test system? I'm very interested in cherry-picking
this to a local
Github user drcrallen commented on the pull request:
https://github.com/apache/spark/pull/9243#issuecomment-150579219
@srowen I was improperly asking if that change would be appropriate to
include in this PR :)
---
Github user drcrallen commented on the pull request:
https://github.com/apache/spark/pull/9243#issuecomment-150575409
Regarding SPARK-11016, can references to Roaring also be removed from
core/src/main/scala/org/apache/spark/serializer/KryoSerializer.scala?
---
Github user drcrallen commented on the pull request:
https://github.com/apache/spark/pull/9243#issuecomment-150581844
General FYI, we did a lot of analysis on various bitmap implementations for
the Druid project. We were more interested in the direct-memory (mmap'd file)
capabilities