I think gokula isn't using Mesos at all atm and is researching whether there are
better options than his current failover environment.
Under the above assumption:
To answer gokula: yes, Mesos would allow you to use the resources of multiple
machines; however, I think the overhead of running multiple Mesos
This depends on whether you have isolation enabled, i.e., cgroups. If you do not
have isolation enabled, i.e., only the posix isolator, it will run fine.
In this case, when the Spark executor is idle, Mesos should show zero
allocated resources while the SparkExecutor process is in fact still
taking
Hi,
This is a hard problem to solve atm if your requirement is that you really need
Spark to operate in Coarse-grained mode.
I assume this is a problem because you are trying to run two Spark applications
(as opposed to two jobs in one application).
Obvious “solutions” would be that you could
y maintain fairness
> via revocation. In that world, you should see weighted fairness maintained,
> and you would use quota and/or reservations to provide guarantees to
> frameworks.
>
> Hope that helps you diagnose further and get some context on the (current)
> caveat
Hi,
While investigating fairness possibilities with Mesos for Spark workloads I’m
trying to achieve for example a 4:1 weight ratio for two frameworks.
Imagine a system with two Spark frameworks (in fine-grained mode, if you're
familiar with Spark), and I want one of the two frameworks to get four ti
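Since the question above asks about a 4:1 weight ratio, here is a minimal sketch, not Mesos source, of how weighted DRF ordering behaves: the allocator favors the framework with the lowest dominant share divided by its weight, so a weight-4 framework can hold roughly four times the resources of a weight-1 peer before they reach parity. The cluster capacity and framework names below are made up.

```python
# Minimal sketch (not Mesos source) of weighted DRF allocation order:
# the allocator offers to the framework with the lowest dominant share
# divided by its weight, so a weight-4 framework can hold roughly 4x
# the resources of a weight-1 framework before they reach parity.

TOTAL = {"cpus": 64.0, "mem": 256.0}  # cluster capacity (hypothetical)

def dominant_share(allocated):
    """Largest fraction of any single resource type held by a framework."""
    return max(allocated[r] / TOTAL[r] for r in TOTAL)

def next_framework(frameworks):
    """Pick the framework with the lowest weighted dominant share."""
    return min(frameworks,
               key=lambda fw: dominant_share(fw["allocated"]) / fw["weight"])

fw_a = {"name": "spark-a", "weight": 4.0, "allocated": {"cpus": 12.0, "mem": 48.0}}
fw_b = {"name": "spark-b", "weight": 1.0, "allocated": {"cpus": 4.0, "mem": 16.0}}

# spark-a already holds 3x spark-b's share, but its weighted share
# (0.1875 / 4) is still below spark-b's (0.0625 / 1), so spark-a
# receives the next offer.
print(next_framework([fw_a, fw_b])["name"])  # → spark-a
```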
Hi,
My environment:
- GCC 4.9 is installed in a non-standard way (can't change this) and relies on
LD_LIBRARY_PATH to run/compile correctly.
Mesos executables have the /usr/lib64 path in their RPATHs.
The problem is that the default libstdc++ (in /usr/lib64) is now used instead
of the libstd
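A likely cause, assuming the Mesos binaries carry a DT_RPATH rather than a DT_RUNPATH: the glibc loader searches DT_RPATH before LD_LIBRARY_PATH (DT_RUNPATH, by contrast, is searched after it), so the /usr/lib64 entry wins. A toy simulation of that search order, with illustrative paths:

```python
# Toy simulation of the glibc dynamic linker's library search order, which
# would explain the problem above: a DT_RPATH baked into the binary (here
# /usr/lib64) is searched BEFORE LD_LIBRARY_PATH, so the system libstdc++
# wins even when LD_LIBRARY_PATH points at the GCC 4.9 one. DT_RUNPATH is
# searched AFTER LD_LIBRARY_PATH. The /opt path below is illustrative.

def search_order(rpath=None, runpath=None, ld_library_path=None):
    """Directories in the order ld.so would try them (simplified).
    If DT_RUNPATH is present, DT_RPATH is ignored."""
    order = []
    if rpath and not runpath:
        order += rpath
    if ld_library_path:
        order += ld_library_path
    if runpath:
        order += runpath
    order.append("/usr/lib64")  # default system dirs (ld.so.cache etc.)
    return order

# With Mesos' baked-in RPATH, /usr/lib64 is consulted first:
print(search_order(rpath=["/usr/lib64"],
                   ld_library_path=["/opt/gcc-4.9/lib64"]))
# → ['/usr/lib64', '/opt/gcc-4.9/lib64', '/usr/lib64']
```

You can check which tag the binaries actually carry with `readelf -d <binary>`; if it is DT_RPATH, rewriting it (e.g. with patchelf) is one possible workaround.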
Wanted to add that, even if there wasn't a preview package, you can clone from
Git and check out a tag; in this case v1.5.0-rc1 is tagged. Then proceed as you
normally would with a source distro, as described in the already mentioned
http://spark.apache.org/docs/latest/building-spark.ht
Can anyone tell me how the Mesos allocation algorithm works?
Does Mesos offer every free resource it has to one framework at a time? Or does
the allocator divide the max offer size by the number of active/registered
frameworks?
and
in case of:
FW1 has a high dominant resource fraction (>50%), wh
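To my understanding (worth verifying against the allocator source), Mesos does not divide offers among frameworks: in each allocation cycle the hierarchical DRF allocator offers all of an agent's free resources to the framework with the lowest dominant share. A toy cycle with made-up capacities, which also shows why a framework with a high dominant fraction stops receiving offers:

```python
# Toy model (assumptions, not Mesos source) of one DRF allocation cycle:
# each agent's free resources are offered wholesale to the framework with
# the lowest dominant share; the offer is not split among frameworks.

TOTAL = {"cpus": 32.0, "mem": 128.0}  # cluster capacity (hypothetical)

def dominant_share(fw):
    """Largest fraction of any single resource type held by a framework."""
    return max(fw["allocated"][r] / TOTAL[r] for r in TOTAL)

def allocation_cycle(frameworks, agents):
    """Offer each agent's free resources to the least-allocated framework."""
    offers = []
    for agent, free in agents.items():
        fw = min(frameworks, key=dominant_share)
        offers.append((agent, fw["name"], free))
        for r in free:  # assume the offer is accepted in full
            fw["allocated"][r] += free[r]
    return offers

fw1 = {"name": "FW1", "allocated": {"cpus": 20.0, "mem": 16.0}}  # dominant 0.625
fw2 = {"name": "FW2", "allocated": {"cpus": 2.0, "mem": 8.0}}    # dominant 0.0625

agents = {"agent-1": {"cpus": 4.0, "mem": 16.0},
          "agent-2": {"cpus": 4.0, "mem": 16.0}}

offers = allocation_cycle([fw1, fw2], agents)
# FW1's dominant share (>50%) stays highest, so both offers go to FW2:
for offer in offers:
    print(offer)
```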
Have you inspected the framework page/tab in the Mesos master web UI? Perhaps,
as you already suspect, DRF is only handing out resources to frameworks which
have a lower dominant share. So you could check whether your Spark instance has
a high dominant share because its executors are taking up a lot
allocation.
If I use the (event) log files of master and slaves, I can’t see usage, and
retrieving/calculating max share per framework is going to be tough, I guess.
Does someone have a solution or idea for plotting the above three listed metrics
accurately?
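Rather than mining the event logs, one option is polling the master's `/metrics/snapshot` and `/state` HTTP endpoints. The sketch below assumes snapshot keys like `master/cpus_used` and a per-framework `used_resources` field in `/state` (check the endpoint docs for your Mesos version); the sample numbers and framework names are invented:

```python
# Sketch for polling allocation metrics instead of mining event logs.
# Assumption: the master's /metrics/snapshot endpoint returns a flat JSON
# dict with keys like "master/cpus_used", and /state lists per-framework
# "used_resources". Sample data and framework names are made up.

def cluster_usage(snapshot):
    """Fraction of cpus/mem in use, from a /metrics/snapshot-style dict."""
    return {r: snapshot[f"master/{r}_used"] / snapshot[f"master/{r}_total"]
            for r in ("cpus", "mem")}

def framework_shares(state, totals):
    """Dominant share per framework, from a /state-style dict."""
    shares = {}
    for fw in state["frameworks"]:
        used = fw["used_resources"]
        shares[fw["name"]] = max(used[r] / totals[r] for r in totals)
    return shares

snapshot = {"master/cpus_used": 8.0, "master/cpus_total": 32.0,
            "master/mem_used": 4096.0, "master/mem_total": 16384.0}
state = {"frameworks": [
    {"name": "spark-a", "used_resources": {"cpus": 6.0, "mem": 3072.0}},
    {"name": "spark-b", "used_resources": {"cpus": 2.0, "mem": 1024.0}},
]}

print(cluster_usage(snapshot))  # → {'cpus': 0.25, 'mem': 0.25}
print(framework_shares(state, {"cpus": 32.0, "mem": 16384.0}))
```

Sampling these endpoints on a timer gives you usage, allocation, and per-framework share as time series you can plot directly.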
Regards,
Hans van den Bogert
Thanks Alex and Vino, very clear.
Hans
On 30 Jun 2015, at 19:36, Vinod Kone wrote:
> To clarify Alex's response. An executor is not shutdown if it has no running
> tasks. It is only shutdown when the framework asks it to (or the framework
> itself shuts down).
ch task terminates or is killed, the corresponding
> executor shuts down as well.
>
> On Tue, Jun 30, 2015 at 12:08 PM, Hans van den Bogert
> wrote:
> I have difficulty understanding Mesos’ model.
>
> A framework can, for every accepted resource offer, mention an executor
> besi
resource offers are used (shortly after each other), which
happen to be from the same slave, are two executors then started at some point?
Or is the second batch of tasks given to the first executor that was started?
I hope my question is clear, if not, let me know,
Hans van den Bogert
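Following Vinod's clarification above, a toy model of the bookkeeping: launching a task whose ExecutorInfo matches an executor already running on that slave reuses it, so the second batch of tasks goes to the first executor rather than starting a new one, and the executor only shuts down on explicit request. The IDs below are illustrative.

```python
# Toy model (not Mesos source) of executor reuse: an executor is keyed by
# (slave, framework, executor_id). Launching a task with an ExecutorInfo
# that is already running on that slave reuses the executor; otherwise a
# new one starts. Per the clarification above, it is not shut down just
# because it has no running tasks.

running = {}  # (slave_id, framework_id, executor_id) -> list of task ids

def launch_task(slave_id, framework_id, executor_id, task_id):
    """Return 'started' if a new executor was launched, else 'reused'."""
    key = (slave_id, framework_id, executor_id)
    started = key not in running
    running.setdefault(key, []).append(task_id)
    return "started" if started else "reused"

# Two accepted offers from the same slave, same ExecutorInfo both times:
print(launch_task("s1", "spark", "exec-1", "task-1"))  # → started
print(launch_task("s1", "spark", "exec-1", "task-2"))  # → reused
```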