That's correct, the slave maintains a mapping of ContainerID <> Executor.
You can see the code here
https://github.com/apache/mesos/blob/master/src/slave/slave.cpp#L3840-L3845
that just generates a UUID and keeps track of it.
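
For illustration, here's roughly what that bookkeeping looks like (a
simplified sketch in the spirit of the slave code, not a verbatim copy;
the map name is made up):

    #include <stout/hashmap.hpp>
    #include <stout/uuid.hpp>

    // Mint a fresh ContainerID for the executor and record the
    // association so that later calls (update, destroy) can be
    // routed to the right container.
    ContainerID containerId;
    containerId.set_value(UUID::random().toString());

    hashmap<ContainerID, ExecutorID> containers;  // illustrative name
    containers[containerId] = executorId;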

On 5 December 2014 at 09:55, Diptanu Choudhury <dipta...@gmail.com> wrote:

> Great, that answers what I was asking. For us, when we start a process via
> Mesos, it's always going to run in a new docker container. We were
> basically using an executor to control the lifecycle of a docker
> container. Now that the ECP would be doing it, we don't need any custom
> executor at all, so we will be using the default executor.
>
> Also, I see that in most of the protobufs the ECP gets from the EC, there
> is a ContainerID passed as the primary identifier - I am guessing that's
> something the Mesos Slave generates. Is there a 1:1 relation between TaskID
> and ContainerID? So when a framework wants to kill a Task and sends the
> TaskID protobuf via the driver.killTask API, does the Mesos Slave translate
> the TaskID to the appropriate ContainerID and send it to the ECP via the EC?
>
> On Thu, Dec 4, 2014 at 3:14 PM, Tom Arnfeld <t...@duedil.com> wrote:
>
>> @Sharma No, this thread is correct; I should have been more specific.
>>
>> To simplify: if you use a new executor (or the default one) for each
>> task, it will be launched inside a new container. In this situation, the
>> *update* method on the ECP will never be called, as far as I can tell.
>>
>> On 3 December 2014 at 19:47, Tim Chen <t...@mesosphere.io> wrote:
>>
>>> Forgot to mention: if you have a custom executor that you launch as
>>> a docker container (by putting DockerInfo in the ExecutorInfo in your
>>> TaskInfo), you can then re-use that executor for multiple tasks.
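>>>
>>> For example, something along these lines when building your TaskInfo
>>> (just a sketch; assume "task" is the TaskInfo being built and the
>>> image name is an example):
>>>
>>>     mesos::ExecutorInfo* executor = task.mutable_executor();
>>>     executor->mutable_executor_id()->set_value("my-executor");
>>>     executor->mutable_command()->set_value("./my-executor");
>>>
>>>     // Launch the executor itself inside a Docker container.
>>>     mesos::ContainerInfo* container = executor->mutable_container();
>>>     container->set_type(mesos::ContainerInfo::DOCKER);
>>>     container->mutable_docker()->set_image("ubuntu:14.04");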
>>>
>>> Tim
>>>
>>> On Wed, Dec 3, 2014 at 11:47 AM, Tim Chen <t...@mesosphere.io> wrote:
>>>
>>>> Hi Sharma,
>>>>
>>>> Yes, currently docker doesn't really support (out of the box) launching
>>>> multiple processes in the same container. They just recently added docker
>>>> exec, but it's not quite clear how that best fits into the Mesos
>>>> integration yet.
>>>>
>>>> So each task run in the Docker containerizer has to be a separate
>>>> container for now.
>>>>
>>>> Tim
>>>>
>>>> On Wed, Dec 3, 2014 at 11:09 AM, Sharma Podila <spod...@netflix.com>
>>>> wrote:
>>>>
>>>>> Yes, although there's a nuance to this specific situation. Here, the
>>>>> same executor is being used for multiple tasks, but the executor is
>>>>> launching a different Docker container for each task. I was extending the
>>>>> coarse-grained allocation concept to within the executor (which is
>>>>> otherwise a fine-grained allocation model).
>>>>> What you mention, we already use for a different framework, not the
>>>>> one Diptanu is talking about.
>>>>>
>>>>> On Wed, Dec 3, 2014 at 11:04 AM, Connor Doyle <con...@mesosphere.io>
>>>>> wrote:
>>>>>
>>>>>> You're right Sharma, it's dependent upon the framework.  If your
>>>>>> scheduler sets a unique ExecutorID for each TaskInfo, then the executor
>>>>>> will not be re-used and you won't have to worry about resizing the
>>>>>> executor's container to accommodate subsequent tasks. This might be a
>>>>>> reasonable simplification to start with, especially if your executor adds
>>>>>> relatively low resource overhead.
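>>>>>>
>>>>>> For instance (a hypothetical naming scheme, entirely up to your
>>>>>> scheduler; taskId here is a std::string you already have):
>>>>>>
>>>>>>     mesos::TaskInfo task;
>>>>>>     task.mutable_task_id()->set_value(taskId);
>>>>>>
>>>>>>     // Derive the ExecutorID from the TaskID so that no two tasks
>>>>>>     // ever share an executor (and hence never share a container).
>>>>>>     mesos::ExecutorInfo* executor = task.mutable_executor();
>>>>>>     executor->mutable_executor_id()->set_value("executor-" + taskId);
>>>>>>     executor->mutable_command()->set_value("/path/to/executor");
>>>>>>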
>>>>>> --
>>>>>> Connor
>>>>>>
>>>>>>
>>>>>> > On Dec 3, 2014, at 10:20, Sharma Podila <spod...@netflix.com>
>>>>>> wrote:
>>>>>> >
>>>>>> > This may have to do with fine-grained vs coarse-grained resource
>>>>>> allocation. Things may be easier for you, Diptanu, if you are using one
>>>>>> Docker container per task (sort of coarse-grained). In that case, I
>>>>>> believe there's no need to alter a running Docker container's resources.
>>>>>> Instead, the resource update of your executor translates into the right
>>>>>> Docker containers running. There are some details to be worked out
>>>>>> there, I am sure.
>>>>>> > It sounds like Tom's strategy uses the same Docker container for
>>>>>> multiple tasks. Tom, do correct me if that's wrong.
>>>>>> >
>>>>>> > On Wed, Dec 3, 2014 at 3:38 AM, Tom Arnfeld <t...@duedil.com> wrote:
>>>>>> > When Mesos is asked to launch a task (with either a custom
>>>>>> Executor or the built-in CommandExecutor) it will first spawn the
>>>>>> executor, which _has_ to be a system process, launched via a command.
>>>>>> This process will be launched inside a Docker container when using the
>>>>>> previously mentioned containerizers.
>>>>>> >
>>>>>> > Once the Executor registers with the slave, the slave will send it
>>>>>> a number of launchTask calls based on the number of tasks queued up for
>>>>>> that executor. The Executor can then do as it pleases with those tasks,
>>>>>> whether it's just a sleep(1) or to spawn a subprocess and do some other
>>>>>> work. Given it is possible for the framework to specify resources for
>>>>>> both tasks and executors, and the only thing which _has_ to be a system
>>>>>> process is the executor, the mesos slave will limit the resources of the
>>>>>> executor process to the sum of (TaskInfo.Executor.Resources +
>>>>>> TaskInfo.Resources).
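>>>>>> >
>>>>>> > To make that concrete with made-up numbers: if the executor declares
>>>>>> 0.1 CPUs / 128MB and is running two tasks of 1 CPU / 512MB each, the
>>>>>> container's limits work out to 2.1 CPUs and 1152MB.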
>>>>>> >
>>>>>> > Mesos also has the ability to launch new tasks on an already
>>>>>> running executor, so it's important that mesos is able to dynamically
>>>>>> scale the resource limits up and down over time. Designing a framework
>>>>>> around this idea can lead to some powerful workflows which would be a
>>>>>> lot more complex to build without Mesos.
>>>>>> >
>>>>>> > Just for an example... Spark.
>>>>>> >
>>>>>> > 1) User launches a job on spark to map over some data
>>>>>> > 2) Spark launches a first wave of tasks based on the offers it
>>>>>> received (let's say T1 and T2)
>>>>>> > 3) Mesos launches executors for those tasks (let's say E1 and E2)
>>>>>> on different slaves
>>>>>> > 4) Spark launches another wave of tasks based on offers, and tells
>>>>>> mesos to use the same executor (E1 and E2)
>>>>>> > 5) Mesos will simply call launchTasks(T{3,4}) on the two already
>>>>>> running executors
>>>>>> >
>>>>>> > At point (3) mesos is going to launch a Docker container and
>>>>>> execute your executor. However, at (5) the executor is already running,
>>>>>> so the tasks will simply be handed to it.
>>>>>> >
>>>>>> > Mesos will guarantee you (I'm 99% sure) that the resources for your
>>>>>> container have been updated to reflect the limits set on the tasks before
>>>>>> handing the tasks to you.
>>>>>> >
>>>>>> > I hope that makes some sense!
>>>>>> >
>>>>>> > --
>>>>>> >
>>>>>> > Tom Arnfeld
>>>>>> > Developer // DueDil
>>>>>> >
>>>>>> >
>>>>>> > On Wed, Dec 3, 2014 at 10:54 AM, Diptanu Choudhury <
>>>>>> dipta...@gmail.com> wrote:
>>>>>> >
>>>>>> > Thanks for the explanation, Tom; yeah, I just figured that out by
>>>>>> reading your code! You're touching the memory.soft_limit_in_bytes and
>>>>>> memory.limit_in_bytes directly.
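>>>>>> >
>>>>>> > So in effect it's doing something like this (my rough sketch; the
>>>>>> cgroup mount point and hierarchy vary by host setup):
>>>>>> >
>>>>>> >     #include <fstream>
>>>>>> >     #include <string>
>>>>>> >
>>>>>> >     // Bypass Docker and write the new hard limit straight into the
>>>>>> >     // container's memory cgroup; the kernel applies it immediately.
>>>>>> >     void updateMemoryLimit(const std::string& containerId,
>>>>>> >                            long long bytes) {
>>>>>> >       std::ofstream limit(
>>>>>> >           "/sys/fs/cgroup/memory/docker/" + containerId +
>>>>>> >           "/memory.limit_in_bytes");
>>>>>> >       limit << bytes;
>>>>>> >     }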
>>>>>> >
>>>>>> > I'm still curious to understand in which situations the Mesos Slave
>>>>>> would call the external containerizer to update the resource limits of a
>>>>>> container. My understanding was that once resource allocation happens
>>>>>> for a task, resources are not taken away until the task exits [fails,
>>>>>> crashes, or finishes] or Mesos asks the slave to kill the task.
>>>>>> >
>>>>>> > On Wed, Dec 3, 2014 at 2:47 AM, Tom Arnfeld <t...@duedil.com> wrote:
>>>>>> > Hi Diptanu,
>>>>>> >
>>>>>> > That's correct, the ECP has the responsibility of updating the
>>>>>> resources for a container, and it will do so as new tasks are launched
>>>>>> and killed for an executor. Since docker doesn't support this, our
>>>>>> containerizer (Deimos does the same) goes behind docker to the cgroup for
>>>>>> the container and updates the resources in a very similar way to the
>>>>>> mesos-slave. I believe this is also what the built-in Docker
>>>>>> containerizer will do.
>>>>>> >
>>>>>> >
>>>>>> https://github.com/duedil-ltd/mesos-docker-containerizer/blob/master/containerizer/commands/update.py#L35
>>>>>> >
>>>>>> > Tom.
>>>>>> >
>>>>>> > --
>>>>>> >
>>>>>> > Tom Arnfeld
>>>>>> > Developer // DueDil
>>>>>> >
>>>>>> >
>>>>>> > On Wed, Dec 3, 2014 at 10:45 AM, Diptanu Choudhury <
>>>>>> dipta...@gmail.com> wrote:
>>>>>> >
>>>>>> > Hi,
>>>>>> >
>>>>>> > I had a quick question about the external containerizer. I see that
>>>>>> once the Task is launched, the ECP can receive update calls, and the
>>>>>> protobuf message passed to the ECP with the update call is
>>>>>> containerizer::Update.
>>>>>> >
>>>>>> > This protobuf has a Resources [list] field, so does that mean Mesos
>>>>>> might ask a running task to re-adjust the enforced resource limits?
>>>>>> >
>>>>>> > How would this work if the ECP was launching docker containers
>>>>>> because Docker doesn't allow changing the resource limits once the
>>>>>> container has been started?
>>>>>> >
>>>>>> > I am wondering how Deimos and mesos-docker-containerizer
>>>>>> handle this.
>>>>>> >
>>>>>> > --
>>>>>> > Thanks,
>>>>>> > Diptanu Choudhury
>>>>>> > Web - www.linkedin.com/in/diptanu
>>>>>> > Twitter - @diptanu
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> > --
>>>>>> > Thanks,
>>>>>> > Diptanu Choudhury
>>>>>> > Web - www.linkedin.com/in/diptanu
>>>>>> > Twitter - @diptanu
>>>>>> >
>>>>>> >
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
>
> --
> Thanks,
> Diptanu Choudhury
> Web - www.linkedin.com/in/diptanu
> Twitter - @diptanu <http://twitter.com/diptanu>
>
