If I understand right, the root trouble is mesos-slave-in-docker. I know
little about CoreOS; do you run mesos-slave on CoreOS as follows?
docker run --rm -it --name mesos-slave --net host \
  --volume /var/run/docker.sock:/var/run/docker.sock --entrypoint mesos-slave
or map the CoreOS docker.sock in
Thanks Alex and Vino, very clear.
Hans
On 30 Jun 2015, at 19:36, Vinod Kone wrote:
> To clarify Alex's response. An executor is not shutdown if it has no running
> tasks. It is only shutdown when the framework asks it to (or the framework
> itself shuts down).
Having knowledge of the tasks pending in the frameworks, at least via the
offer filters specifying minimum resource sizes, could prove useful. And
roles+weights would be complementary. This might remove the need to use
dynamic reservations for every framework that uses more than the smallest
size r
To clarify Alex's response. An executor is not shutdown if it has no
running tasks. It is only shutdown when the framework asks it to (or the
framework itself shuts down).
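As a toy illustration of the lifecycle rule stated above (plain Python; the class and method names here are invented for illustration, not the real Mesos code):

```python
# Toy model of the executor lifecycle described above: an executor with
# zero running tasks stays alive until its framework explicitly shuts it
# down (or the framework itself goes away).

class Executor:
    def __init__(self, executor_id):
        self.executor_id = executor_id
        self.tasks = set()
        self.alive = True

    def launch_task(self, task_id):
        self.tasks.add(task_id)

    def task_finished(self, task_id):
        # Finishing the last task does NOT terminate the executor.
        self.tasks.discard(task_id)

    def shutdown(self):
        # Only an explicit shutdown (framework request or framework
        # teardown) terminates the executor.
        self.alive = False


executor = Executor("builder-1")
executor.launch_task("task-1")
executor.task_finished("task-1")
print(executor.alive)   # True: still alive with no running tasks
executor.shutdown()
print(executor.alive)   # False
```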
Hi,
One really awesome feature would be to allow the framework sorter (or
perhaps another component of the allocator) to see the size of all tasks
(in terms of resources used) owned by a particular framework or role.
If the sorter module knew the distribution of task sizes for each
framework/rol
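For context on where such task-size knowledge would plug in: the default Mesos allocator orders frameworks by dominant share (DRF). A minimal sketch of that sorting, with made-up cluster totals and allocations:

```python
# Minimal sketch of dominant-resource-fairness (DRF) sorting, the policy
# the default Mesos allocator uses to order frameworks. The cluster
# totals and per-framework allocations below are invented for illustration.

TOTAL = {"cpus": 100.0, "mem": 1000.0}

def dominant_share(allocated):
    """A framework's dominant share is its largest fractional use of
    any single resource type."""
    return max(allocated[r] / TOTAL[r] for r in TOTAL)

frameworks = {
    "spark":    {"cpus": 30.0, "mem": 100.0},   # dominant: cpus, 0.30
    "marathon": {"cpus": 5.0,  "mem": 400.0},   # dominant: mem,  0.40
    "jenkins":  {"cpus": 10.0, "mem": 50.0},    # dominant: cpus, 0.10
}

# Offer resources to the framework with the lowest dominant share first.
order = sorted(frameworks, key=lambda f: dominant_share(frameworks[f]))
print(order)  # ['jenkins', 'spark', 'marathon']
```

A sorter that also knew each framework's task-size distribution could break ties or skip frameworks whose pending tasks cannot fit in the offerable resources anyway.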
Yes, an alternative allocator module would be great in terms of implementation,
but adding more capabilities to "filters" might be required to convey some
more info to the Mesos scheduler/allocator. Am I correct here, or are there
already ways to convey such info?
Thanks,
Dharmesh
On Tue, Jun 30, 201
One option is to implement alternative behaviour in an allocator module.
On Tue, Jun 30, 2015 at 3:34 PM, Dharmesh Kakadia
wrote:
> Interesting.
>
> I agree, that dynamic reservation and optimistic offers will help mitigate
> the issue, but the resource fragmentation (and starvation due to that)
An executor is terminated by Mesos if it misbehaves (e.g. sends
TASK_STAGING updates or uses too much memory), if it is killed by an
oversubscription QoSController, if its framework shuts down, or if a
scheduler sends a scheduler::Call::Shutdown request to Mesos. Note that an
executor may also fail or decide to com
Interesting.
I agree that dynamic reservations and optimistic offers will help mitigate
the issue, but resource fragmentation (and the starvation due to it) is a
more general problem. Predictive models can certainly aid the Mesos
scheduler here. I think the filters in Mesos can be extended to ad
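The fragmentation problem mentioned above is easy to see with a toy example (the per-agent numbers are invented): the cluster has more than enough free CPU in aggregate, yet a large task starves because no single agent can host it.

```python
# Toy illustration of resource fragmentation: plenty of free CPU in
# total, but no single agent can fit a large task. Numbers are invented.

agents_free_cpus = [1.5, 1.0, 2.0, 0.5]   # free CPUs per agent
large_task_cpus = 4.0

total_free = sum(agents_free_cpus)
fits_somewhere = any(free >= large_task_cpus for free in agents_free_cpus)

print(total_free)        # 5.0 -- more than the task needs in aggregate
print(fits_somewhere)    # False -- but no single agent can host it
```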
How would this be different from using mesos-dns from the app’s perspective? It
uses a DNS interface, so the app should be able to resolve SRV records, right?
So if I insist that the resolution of the provided DNS name (for the service
it depends on) should be transparent to the app, then
Would not using Bamboo to update haproxy config have the same problems I
described for the Marathon provided script? It would still run in a separate
container.
From: zhou weitao [mailto:zhouwtl...@gmail.com]
Sent: Monday, June 29, 2015 10:51 PM
To: user@mesos.apache.org
Subject: Re: service
This solution works when the app is being launched. But if the dependency goes
down and re-launches at other coordinates, the app depending on it will break,
unless it knows how to look the dependency up via SRV, which I
wanted to avoid. This could be solved by running haproxy insi
We're using it for streaming realtime logs to the framework. In our short-lived
framework for building Docker images, the executor streams back stdout/stderr
logs from the build to the client for ease of use/debugging and the
executor->framework best-effort messaging stuff made this effortless.
Exactly what I needed to know, one follow-up question though:
> An executor is terminated by Mesos if it has no running tasks
Does this mean there is some timeout? Or does the “parent” framework actively
have to give a command to shutdown the executor? Because using Spark in
fine-grained mode for
There are two types of tasks: (1) those that specify an executor and (2)
those that specify a command.
When a task of type (1) arrives at a slave, the slave checks whether an
executor with the same executorID already exists on this slave. If yes, the
task is redirected to the executor; if not, t
It depends on the framework; Mesos imposes no rules on the relationship between
a task and an executor. The framework can specify an Executor ID with the tasks
it's submitting, and if two tasks land on the same slave with the same Executor
ID, Mesos will take care of ensuring they share the same Exe
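The dispatch rule described in the two replies above can be sketched as follows (invented names; an illustration, not the actual slave code):

```python
# Sketch of the dispatch rule: if a task carries an Executor ID that is
# already running on this slave, the task is routed to that executor;
# otherwise a new executor is started.

class Slave:
    def __init__(self):
        self.executors = {}   # executor_id -> list of task_ids

    def launch(self, task_id, executor_id):
        if executor_id in self.executors:
            # Same Executor ID already running here: share the executor.
            self.executors[executor_id].append(task_id)
            return "reused"
        self.executors[executor_id] = [task_id]
        return "started"


slave = Slave()
print(slave.launch("task-1", "exec-A"))   # started
print(slave.launch("task-2", "exec-A"))   # reused: same Executor ID
print(slave.launch("task-3", "exec-B"))   # started: new Executor ID
```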
Every offer has an ID. Once an offer is accepted, it cannot be used by
others. If you use the same offer ID to start two executors, you will get an
error.
On Tue, Jun 30, 2015 at 6:08 PM, Hans van den Bogert
wrote:
> I have difficulty understanding Mesos’ model.
>
> A framework can, for every accep
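The single-use offer rule stated above can be sketched as (invented names; not the real master implementation):

```python
# Sketch of the single-use offer rule: once an offer ID is accepted it
# is removed from the outstanding set, and reusing it is an error.

class Master:
    def __init__(self):
        self.outstanding_offers = {"offer-1", "offer-2"}

    def accept(self, offer_id):
        if offer_id not in self.outstanding_offers:
            raise ValueError("offer %s is no longer valid" % offer_id)
        self.outstanding_offers.remove(offer_id)


master = Master()
master.accept("offer-1")           # fine the first time
try:
    master.accept("offer-1")       # reuse is an error
except ValueError as e:
    print("error:", e)
```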
I have difficulty understanding Mesos’ model.
A framework can, for every accepted resource offer, mention an executor
besides the task descriptions it submits to Mesos. However, does every use of
offered resources start a new executor? Thus, for instance, if the scenario
occurs that two resourc
Toy Go app that queries SRV records, generates environment variables, and
injects them into an exec'd command line:
https://github.com/jdef/srv2env
If you wanted template support you could pair this with confd and use the
env var backend.
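A minimal sketch of the SRV-to-environment idea described above (pure Python; the hardcoded records stand in for a real DNS SRV lookup, and the variable-naming scheme is invented for illustration):

```python
# Sketch of the srv2env idea: turn SRV answers into environment
# variables a child process can consume. The records below are
# hardcoded stand-ins for a real DNS SRV lookup.

# (priority, weight, port, target) tuples, as an SRV answer would give.
srv_records = {
    "_web._tcp.marathon.mesos": [(0, 1, 31005, "slave1.mesos")],
    "_db._tcp.marathon.mesos":  [(0, 1, 31042, "slave2.mesos")],
}

def srv_to_env(records):
    env = {}
    for name, answers in records.items():
        # "_web._tcp.marathon.mesos" -> "WEB"
        service = name.split(".")[0].lstrip("_").upper()
        _prio, _weight, port, target = answers[0]
        env["%s_HOST" % service] = target
        env["%s_PORT" % service] = str(port)
    return env

env = srv_to_env(srv_records)
print(env["WEB_HOST"], env["WEB_PORT"])   # slave1.mesos 31005
# A srv2env-style wrapper would merge these into os.environ before
# exec'ing the wrapped command, e.g. os.environ.update(env).
```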
If you're running tasks on marathon, when they die there i
+1 for mesos-consul
We've been using it to great effect!
From: Dave Lester [d...@davelester.org]
Sent: 30 June 2015 06:38
To: user@mesos.apache.org
Subject: Re: service discovery in Mesos on CoreOS
It would be great to have a documentation page devoted to compili