Thanks Tom.
Another question - say the ECP's wait method was invoked, and after some
time the ECP crashed while wait was still blocked, so the process would
exit without returning the Termination protobuf to the EC. In that case,
will the EC call wait on the ECP again?
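The failure path being asked about can be sketched from the EC side. This is only a sketch: the CLI shape (the command name on argv, a Termination protobuf on stdout) loosely mirrors the external containerizer protocol, and `ecp_path` is a hypothetical binary path.

```python
import subprocess

def ecp_wait(ecp_path, container_id):
    """Invoke the (hypothetical) ECP 'wait' command and return its stdout,
    which should carry a serialized containerizer::Termination protobuf.
    If the ECP crashes mid-wait (non-zero exit) or writes nothing, there is
    no Termination to hand back, so the caller signals failure instead of
    calling wait again."""
    proc = subprocess.run([ecp_path, "wait", container_id],
                          capture_output=True)
    if proc.returncode != 0 or not proc.stdout:
        # No Termination protobuf came back: treat the container as failed.
        return None
    return proc.stdout
```

The point of the sketch is the branch: a crashed ECP yields no protobuf, and the sensible EC-side behavior is to propagate a failure rather than re-enter wait.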
On Fri, Dec 5, 2014 at 2:47 AM, T
That's correct, the slave maintains a mapping of ContainerID <> Executor.
You can see the code here
https://github.com/apache/mesos/blob/master/src/slave/slave.cpp#L3840-L3845
that just generates a UUID and keeps track of it.
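The bookkeeping described in that code can be summarized in a few lines. This is an illustrative sketch, not the slave's actual API: one ContainerID (a fresh UUID) per executor, re-used for subsequent tasks that name the same executor.

```python
import uuid

class SlaveBookkeeping:
    """Sketch of the slave-side mapping of ContainerID <-> Executor
    (class and method names are made up for illustration)."""
    def __init__(self):
        # (framework_id, executor_id) -> container_id
        self.containers = {}

    def container_for(self, framework_id, executor_id):
        key = (framework_id, executor_id)
        if key not in self.containers:
            # First task for this executor: generate a fresh ContainerID.
            self.containers[key] = str(uuid.uuid4())
        return self.containers[key]
```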
On 5 December 2014 at 09:55, Diptanu Choudhury wrote:
Great, that answers what I was asking. For us, when we start a process via
Mesos, it's always going to run in a new Docker container. We were
basically using an executor to control the lifecycle of a Docker
container. So now, since the ECP would be doing it, we don't need any
custom executor at all.
@Sharma No, this thread is correct; I should have been more specific.
To simplify, if you use a new executor (or the default one) for each task,
it will be launched inside of a new container. In this situation, the
*update* method on the ECP will never be called as far as I can tell.
On 3 December
Hi Sharma,
Yes, currently Docker doesn't really support (out of the box) launching
multiple processes in the same container. They recently added docker exec,
but it's not quite clear how it best fits into the Mesos integration yet.
So each task run in the Docker containerizer has to be a separate container.
Forgot to mention: if you have a custom executor that you launch as a
Docker container (by putting DockerInfo in the ExecutorInfo in your
TaskInfo), you can then re-use that executor for multiple tasks.
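The shape being described can be mocked up with plain dicts. This only roughly follows the Mesos protobufs (field names are illustrative, and `./my-executor` is a hypothetical binary); the point is where DockerInfo sits: inside the ContainerInfo of the ExecutorInfo.

```python
def make_docker_executor_info(executor_id, image):
    """Loose dict mock-up of an ExecutorInfo carrying DockerInfo.
    Launched this way, the executor itself runs inside a Docker
    container and can then be re-used for several tasks."""
    return {
        "executor_id": executor_id,
        "command": {"value": "./my-executor"},  # hypothetical executor binary
        "container": {
            "type": "DOCKER",
            "docker": {"image": image},  # DockerInfo nested in ContainerInfo
        },
    }
```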
Tim
On Wed, Dec 3, 2014 at 11:47 AM, Tim Chen wrote:
Yes, although there's a nuance to this specific situation. Here, the same
executor is being used for multiple tasks, but the executor is launching a
different Docker container for each task. I was extending the coarse-grained
allocation concept to within the executor (which is in a fine-grained
allocation
You're right Sharma, it's dependent upon the framework. If your scheduler sets
a unique ExecutorID for each TaskInfo, then the executor will not be re-used
and you won't have to worry about resizing the executor's container to
accommodate subsequent tasks. This might be a reasonable simplification.
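That scheduler-side simplification can be sketched as follows. The dict fields only loosely mimic the TaskInfo protobuf and the naming scheme is made up; what matters is that every task gets its own ExecutorID.

```python
import itertools

_executor_seq = itertools.count()

def make_task(framework_id, name):
    """Sketch: stamp every TaskInfo with a unique ExecutorID so the
    executor, and hence its container, is never re-used across tasks
    (field names are illustrative, not the real Mesos API)."""
    n = next(_executor_seq)
    return {
        "framework_id": framework_id,
        "task_id": f"{name}-{n}",
        # Unique per task => no running container ever has to be resized
        # to accommodate a second task.
        "executor_id": f"{name}-executor-{n}",
    }
```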
This may have to do with fine-grained vs. coarse-grained resource allocation.
Things may be easier for you, Diptanu, if you are using one Docker
container per task (sort of coarse-grained). In that case, I believe there's
no need to alter a running Docker container's resources. Instead, the
resource update
When Mesos is asked to launch a task (with either a custom executor or the
built-in CommandExecutor) it will first spawn the executor, which _has_ to be a
system process, launched via command. This process will be launched inside of a
Docker container when using the previously mentioned containerizer.
Thanks for the explanation Tom, yeah I just figured that out by reading
your code! You're touching memory.soft_limit_in_bytes and
memory.limit_in_bytes directly.
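The "going behind Docker's back" trick boils down to writing those two files in the container's memory cgroup. A minimal sketch, assuming cgroup v1 paths; real code also has to handle the case where the new hard limit is below current usage:

```python
import os

def update_memory_limits(cgroup_dir, soft_bytes, hard_bytes):
    """Write new memory limits straight into a container's memory cgroup
    (cgroup v1 file names). cgroup_dir is the container's directory under
    the memory controller hierarchy."""
    with open(os.path.join(cgroup_dir, "memory.soft_limit_in_bytes"), "w") as f:
        f.write(str(soft_bytes))
    with open(os.path.join(cgroup_dir, "memory.limit_in_bytes"), "w") as f:
        f.write(str(hard_bytes))
```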
Still curious to understand in which situations the Mesos slave would call
the external containerizer to update the resource limits of a container.
Hi Diptanu,
That's correct, the ECP has the responsibility of updating the resources for a
container, and it will do so as new tasks are launched and killed for an
executor. Since Docker doesn't support this, our containerizer (Deimos does
the same) goes behind Docker to the cgroup for the container
Hi,
I had a quick question about the external containerizer. I see that once
the task is launched, the ECP can receive update calls, and the protobuf
message passed to the ECP with the update call is containerizer::Update.
This protobuf has a Resources [list] field, so does that mean Mesos might
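As I understand it, the Resources list in containerizer::Update carries the executor's new totals (one Resource entry per name, e.g. cpus, mem), not a delta for a single task. A sketch of folding that list into a map, with plain dicts standing in for the Resource protobufs:

```python
def total_resources(resources):
    """Fold a list of scalar Resource-like dicts ({'name': ..., 'scalar': ...})
    into a {name: amount} map, summing repeated names."""
    totals = {}
    for r in resources:
        totals[r["name"]] = totals.get(r["name"], 0) + r["scalar"]
    return totals
```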