then waiting for spark jobs to
> recover, then rolling another agent is not at all practical. It is a huge
> benefit if we can just update the agents in bulk (or even sequentially, but
> only waiting for the mesos agent to recover).
>
> On Wed, May 24, 2017 at 11:17 AM Michael Gummelt <m
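The bulk/rolling upgrade being discussed relies on Mesos agent checkpointing and recovery. A rough sketch of upgrading one agent without killing its executors (agent behavior per the Mesos operator docs; the service name, package manager, and versions here are illustrative placeholders):

```shell
# Stop the agent process only; executors keep running, and executor state
# stays checkpointed on disk under the agent's --work_dir.
sudo systemctl stop mesos-slave

# Upgrade the agent binary (package name/version are placeholders).
sudo apt-get install -y mesos

# Restart; with --recover=reconnect the agent reattaches to the still-live
# executors, as long as it comes back within --recovery_timeout.
sudo systemctl start mesos-slave
```

This only helps frameworks that set checkpoint=true in their FrameworkInfo, which is exactly what the thread is asking Spark's coarse-grained backend to do.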
As per https://issues.apache.org/jira/browse/SPARK-4899
> >> >
> >> > org.apache.spark.scheduler.cluster.mesos.MesosSchedulerUtils#
> createSchedulerDriver
> >> > allows checkpointing, but only
> >> > org.apache.spark.scheduler.cluster.mesos.MesosClusterScheduler uses
> it.
> >> > Is
> >> > there a reason for that?
>
> -
> To unsubscribe e-mail: dev-unsubscr...@spark.apache.org
>
>
--
Michael Gummelt
Software Engineer
Mesosphere
It seems that CPU usage is
> just a "label" for an executor on Mesos. Where's this in the code?
>
> Regards,
> Jacek Laskowski
>
> https://medium.com/@jaceklaskowski/
> Mastering Apache Spark 2.0 https://bit.ly/mastering-apache-spark
> Follow me at https://
on?
> >>
> >> Tim
> >>
> >> On Mon, Dec 19, 2016 at 2:45 PM, Mehdi Meziane
> >> <mehdi.mezi...@ldmobile.net> wrote:
> >> > We will be interested in the results if you give a try to Dynamic
> >> allocation
> >> > wi
clean up its resources.
>
>
> Regards
> Sumit Chawla
>
>
> On Mon, Dec 19, 2016 at 12:45 PM, Michael Gummelt <mgumm...@mesosphere.io>
> wrote:
>
>> > I should presume that the number of executors should be less than the number
>> of tasks.
>>
>> No.
>>> > number starts decreasing. However, the number of CPUs does not
>>> decrease
>>> > proportionally. When the job was about to finish, there was a single
>>> > remaining task, however the CPU count was still 20.
>>> >
>>> > My questions, is why there is no one to one mapping between tasks and
>>> cpus
>>> > in Fine grained? How can these CPUs be released when the job is done,
>>> so
>>> > that other jobs can start.
>>> >
>>> >
>>> > Regards
>>> > Sumit Chawla
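For context on the question above: in fine-grained mode each executor JVM keeps at least one core for itself even when it has no running tasks, so CPU counts do not track task counts one-to-one. The workaround commonly suggested in this era was coarse-grained mode (the default from Spark 2.0) together with dynamic allocation. A sketch; the master URL, core count, and jar path are illustrative:

```shell
# Run in coarse-grained mode so cores are acquired and released at executor
# granularity rather than held per task slot by long-lived fine-grained JVMs.
spark-submit \
  --master mesos://zk://zk1:2181/mesos \
  --conf spark.mesos.coarse=true \
  --conf spark.cores.max=20 \
  path/to/app.jar  # placeholder application
```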
>>>
>>
>>
>
d and request them again later when there is demand. This feature is
> particularly useful if multiple applications share resources in your Spark
> cluster.
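The quoted description of dynamic allocation corresponds to a handful of well-known settings; a minimal sketch (the executor bounds and jar path are illustrative, and on Mesos the external shuffle service must already be running on each agent):

```shell
# Dynamic allocation needs the external shuffle service so executors can be
# released without losing the shuffle files they wrote.
spark-submit \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.shuffle.service.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=2 \
  --conf spark.dynamicAllocation.maxExecutors=20 \
  path/to/app.jar  # placeholder application
```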
>
> ----- Original Message -----
> From: "Sumit Chawla" <sumitkcha...@gmail.com>
> À: "Michael Gu
in Fine grained? How can these CPUs be released when the job is done, so
> that other jobs can start.
>
>
> Regards
> Sumit Chawla
>
>
mesos in cluster mode. Then I submitted a long
> running job, which succeeded.
>
> Then I want to kill the job.
> How could I do that? Are there any commands similar to those for launching
> Spark on YARN?
>
>
> Thanks,
>
> Jared, (韦煜)
> Software developer
> Interested in open source software, big data, Linux
>
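Since the reply is cut off in this archive: for jobs submitted through the MesosClusterDispatcher, spark-submit itself exposes --kill and --status, much like standalone cluster mode. The dispatcher host and submission id below are placeholders:

```shell
# Kill a cluster-mode driver by its submission id (printed at submit time)
spark-submit --master mesos://dispatcher-host:7077 \
  --kill driver-20170101000000-0001

# Check its status the same way
spark-submit --master mesos://dispatcher-host:7077 \
  --status driver-20170101000000-0001
```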
grade to this stable
>>> release.
>>>
>>> To download Apache Spark 2.0.1, visit http://spark.apache.org/downloads.html
>>>
>>> We would like to acknowledge all community members for contributing
>>> patches to this release.
>>>
>>>
>>>
>>
>>
>> --
>> --
>> Cheers,
>> Praj
>>
>
>
RC
> > once SPARK-17666 and SPARK-17673 are fixed.
> >
> > Please shout if you disagree.
>
>
>
e changed to behave like the
> others (e.g. only enabled when the YARN code changes).
>
> --
> Marcelo
>
>
>
/R and checkpointing too.
>>
>>
>>- We have to load every single RDD that Spark core reads over kerberized
>> HDFS without breaking the Spark API.
>>
>>
>>
>>
>> As you can see, we have a "special" requirement: we need to set the proxy
>> user per job over the same Spark context.
>>
>> Do you have any idea to cover it?
>>
>>
>
laskowski
>
>
> On Fri, Aug 26, 2016 at 10:20 PM, Michael Gummelt
> <mgumm...@mesosphere.io> wrote:
> > Hello devs,
> >
> > Much like YARN, Mesos has been refactored into a Maven module. So when
> > building, you must add "-Pmesos" to enable Mesos suppo
Hello devs,
Much like YARN, Mesos has been refactored into a Maven module. So when
building, you must add "-Pmesos" to enable Mesos support.
The pre-built distributions from Apache will continue to enable Mesos.
PR: https://github.com/apache/spark/pull/14637
Cheers
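For reference, the announcement above translates to build invocations like the following (the profile name is from the PR; the other flags are typical but optional):

```shell
# Maven build with Mesos support enabled
./build/mvn -Pmesos -DskipTests clean package

# Or bake Mesos support into a binary distribution
./dev/make-distribution.sh --tgz -Pmesos
```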
."
However, it seems that RPC can be SASL encrypted as well:
https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/rpc/netty/NettyRpcEnv.scala#L64
Is this accurate? If so, I'll submit a PR to update the docs.
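If the linked code is right, the relevant knobs would be the standard authentication settings; a sketch, with the config names as documented in Spark's security configuration of that era and the jar path a placeholder:

```shell
# Enable RPC authentication plus SASL encryption of the RPC channel
spark-submit \
  --conf spark.authenticate=true \
  --conf spark.authenticate.enableSaslEncryption=true \
  path/to/app.jar  # placeholder application
```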
dits.
>
> On Mon, Jul 18, 2016 at 10:00 PM, Michael Gummelt
> <mgumm...@mesosphere.io> wrote:
> > I just flailed on this a bit before finding this email. Can someone
> please
> > update
> >
> https://cwiki.apache.org/confluence/display/SPARK/Useful+Developer+Too
>> >>
>> >
>>
>>
>>
>> --
>> Marcelo
>>
>>
>>
>
hed on the website separately from the
> main release so we do not need to block the release due to documentation
> errors either.
>
>
> Note: There was a mistake made during "rc3" preparation, and as a result
> there is no "rc3", but only "rc4".
>
>
great contribution for someone who does have the time and the
>>>> chops to build it.
>>>>
>>>> Cheers,
>>>>
>>>> Michael
>>>>
>>>>
>>>
>>
>> --
>> Cell : 425-233-8271
>> Twitter: https://twitter.com/holdenkarau
>>
>>
. Can someone confirm that this is in fact a bug? If so, I'm happy
to submit a PR.
umerous FetchFailure exceptions, Tasks
>>> recomputed or Stages aborted, etc. so that the net effect is not all that
>>> much different than if the shuffle files had not been relocated to HDFS and
>>> the Executors or ShuffleService instances had just disappeared along with
>
Hm, while this is an attractive idea in theory, in practice I think you are
> substantially overestimating HDFS' ability to handle a lot of small,
> ephemeral files. It has never really been optimized for that use case.
>
> On Thu, Apr 28, 2016 at 11:15 AM, Michael Gummelt <mgumm..
ared along with
> the worker nodes?
>
> On Thu, Apr 28, 2016 at 10:46 AM, Michael Gummelt <mgumm...@mesosphere.io>
> wrote:
>
>> > Why would you run the shuffle service on 10K nodes but Spark executors
>> on just 100 nodes? wouldn't you also run that service just on
ou in comparison? There's some
> additional overhead and if anything you lose some control over
> locality, in a context where I presume HDFS itself is storing data on
> much more than the 100 Spark nodes.
>
> On Thu, Apr 28, 2016 at 1:34 AM, Michael Gummelt <mgumm...@mesosph
t;
>
> If someone did do this in RawLocalFS, it'd be nice if the patch also
> allowed you to turn off CRC creation and checking.
>
> That's not only part of the overhead; it means that flush() doesn't actually
> flush until you reach the end of a CRC32 block ... thus breaking what few
> durability guarantees POSIX offers.
>
>
>
>
Has there been any thought or work on this (or any other networked file
system)? It would be valuable to support dynamic allocation without
depending on the shuffle service.
at I or ViaSat (my employer) gets working we would definitely be
> interested in contributing it back and would very much want to avoid
> maintaining a fork of Spark.
>
> Tony
>
>
>
I guess ?).
> >
> >
> > Proposal is for 1.6x line to continue to be supported with critical
> fixes; newer features will require 2.x and so jdk8
> >
> > Regards
> > Mridul
> >
> >
>
>
>
>