Alex and Marco,

Thanks very much for your really helpful explanations.

For better or worse, neither C++ nor Python is my thing; Java is the go-to
language for me.

Cordially,

Paul

On Sat, Aug 29, 2015 at 5:23 AM, Marco Massenzio <ma...@mesosphere.io>
wrote:

> Hi Paul,
>
> +1 to what Alex/Tim say.
>
> Maybe a (simple) example will help: a very basic "framework" I created
> recently does away with the "Executor" and only uses the "Scheduler",
> sending a CommandInfo structure to the Mesos Agent node to execute.
>
> See:
> https://github.com/massenz/mongo_fw/blob/develop/src/mongo_scheduler.cpp#L124
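>
> If Java is easier to read, here is a rough, untested sketch of the same idea
> using the Mesos Java bindings: the TaskInfo carries only a CommandInfo (no
> ExecutorInfo), so the Agent runs it with its built-in command executor and
> nothing framework-specific needs to be installed on the slave. The class
> name, task names, command, and resource amounts below are just made up for
> illustration:
>
>     import java.util.Collections;
>     import java.util.List;
>
>     import org.apache.mesos.MesosSchedulerDriver;
>     import org.apache.mesos.Protos.*;
>     import org.apache.mesos.Scheduler;
>     import org.apache.mesos.SchedulerDriver;
>
>     // Scheduler-only framework: every task carries just a CommandInfo,
>     // so no custom Executor binary is needed on the slave node.
>     public class CommandScheduler implements Scheduler {
>
>       public void resourceOffers(SchedulerDriver driver, List<Offer> offers) {
>         for (Offer offer : offers) {
>           // A real scheduler would track what it has launched and decline
>           // offers it doesn't need; this one launches a task per offer.
>           TaskInfo task = TaskInfo.newBuilder()
>               .setName("hello-task")
>               .setTaskId(TaskID.newBuilder().setValue("hello-" + System.nanoTime()))
>               .setSlaveId(offer.getSlaveId())
>               .addResources(scalar("cpus", 0.1))
>               .addResources(scalar("mem", 32))
>               // Only a CommandInfo -- no ExecutorInfo is set.
>               .setCommand(CommandInfo.newBuilder().setValue("echo 'hello from Mesos'"))
>               .build();
>           driver.launchTasks(Collections.singletonList(offer.getId()),
>                              Collections.singletonList(task));
>         }
>       }
>
>       private static Resource scalar(String name, double value) {
>         return Resource.newBuilder()
>             .setName(name)
>             .setType(Value.Type.SCALAR)
>             .setScalar(Value.Scalar.newBuilder().setValue(value))
>             .build();
>       }
>
>       // Remaining Scheduler callbacks left empty for brevity.
>       public void registered(SchedulerDriver d, FrameworkID id, MasterInfo m) {}
>       public void reregistered(SchedulerDriver d, MasterInfo m) {}
>       public void offerRescinded(SchedulerDriver d, OfferID id) {}
>       public void statusUpdate(SchedulerDriver d, TaskStatus status) {}
>       public void frameworkMessage(SchedulerDriver d, ExecutorID e, SlaveID s, byte[] data) {}
>       public void disconnected(SchedulerDriver d) {}
>       public void slaveLost(SchedulerDriver d, SlaveID s) {}
>       public void executorLost(SchedulerDriver d, ExecutorID e, SlaveID s, int status) {}
>       public void error(SchedulerDriver d, String message) {}
>
>       public static void main(String[] args) {
>         FrameworkInfo framework = FrameworkInfo.newBuilder()
>             .setUser("")  // empty string lets Mesos fill in the current user
>             .setName("command-scheduler-example")
>             .build();
>         // args[0] is the Master address, e.g. "host:5050" or "zk://host:2181/mesos"
>         MesosSchedulerDriver driver =
>             new MesosSchedulerDriver(new CommandScheduler(), framework, args[0]);
>         driver.run();
>       }
>     }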
>
> If Python is more your thing, there are examples in the Mesos repository,
> or you can take a look at something I started recently that uses the new
> (0.24) HTTP API (NOTE - this is still very much WIP):
> https://github.com/massenz/zk-mesos/blob/develop/notebooks/HTTP%20API%20Tests.ipynb
>
> *Marco Massenzio*
>
> *Distributed Systems Engineer*
> http://codetrips.com
>
> On Fri, Aug 28, 2015 at 8:44 AM, Paul Bell <arach...@gmail.com> wrote:
>
>> Alex & Tim,
>>
>> Thank you both; most helpful.
>>
>> Alex, can you dispel my confusion on this point: I keep reading that a
>> "framework" in Mesos (e.g., Marathon) consists of a scheduler and an
>> executor. This reference to an "executor" made me think that Marathon must
>> have *some* kind of presence on the slave node. But the more familiar I
>> become with Mesos, the less likely this seems. So, what does it mean to
>> talk about the Marathon framework's "executor"?
>>
>> Tim, I did come up with a simple work-around that involves re-copying the
>> needed file into the container each time the application is started. For
>> reasons unknown, this file is not kept in a location that would readily
>> lend itself to my use of persistent storage (Docker -v). That said, I am
>> keenly interested in learning how to write both custom executors &
>> schedulers. Any sense for what release of Mesos will see "persistent
>> volumes"?
>>
>> Thanks again, gents.
>>
>> -Paul
>>
>>
>>
>> On Fri, Aug 28, 2015 at 2:26 PM, Tim Chen <t...@mesosphere.io> wrote:
>>
>>> Hi Paul,
>>>
>>> We don't [re]start a container, since we assume that once a task terminates
>>> its container is no longer reused. In Mesos, people who want tasks to reuse
>>> the same executor (and handle the task logic accordingly) opt for the custom
>>> executor route.
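>>>
>>> As a rough, untested illustration of that route (Java bindings assumed; the
>>> executor id and jar path are just placeholders, and this would sit inside a
>>> scheduler's resourceOffers() callback where `offer` is one of the received
>>> offers): the framework sets an ExecutorInfo on the TaskInfo instead of a
>>> bare CommandInfo, and every task that names the same ExecutorID on a given
>>> Agent is handed to that one long-running executor, which can keep state
>>> between tasks:
>>>
>>>     // Long-lived custom executor, started once per Agent per framework.
>>>     ExecutorInfo executor = ExecutorInfo.newBuilder()
>>>         .setExecutorId(ExecutorID.newBuilder().setValue("my-executor"))
>>>         .setCommand(CommandInfo.newBuilder()
>>>             .setValue("java -jar /opt/my-executor.jar"))  // placeholder command
>>>         .build();
>>>
>>>     Resource cpus = Resource.newBuilder()
>>>         .setName("cpus").setType(Value.Type.SCALAR)
>>>         .setScalar(Value.Scalar.newBuilder().setValue(0.1))
>>>         .build();
>>>
>>>     // setExecutor(...) instead of setCommand(...): the task is delivered
>>>     // to the already-running executor's launchTask() callback.
>>>     TaskInfo task = TaskInfo.newBuilder()
>>>         .setName("task-1")
>>>         .setTaskId(TaskID.newBuilder().setValue("task-1"))
>>>         .setSlaveId(offer.getSlaveId())
>>>         .addResources(cpus)
>>>         .setExecutor(executor)
>>>         .build();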
>>>
>>> We're working on a way to keep your sandbox data beyond a container's
>>> lifecycle; the feature is called persistent volumes. We haven't integrated it
>>> with the Docker containerizer yet, so you'll have to wait to use it.
>>>
>>> You could also choose to implement a custom executor for now if you like.
>>>
>>> Tim
>>>
>>> On Fri, Aug 28, 2015 at 10:43 AM, Alex Rukletsov <a...@mesosphere.com>
>>> wrote:
>>>
>>>> Paul,
>>>>
>>>> that component is called DockerContainerizer and it's part of the Mesos
>>>> Agent (see "src/slave/containerizer/docker.hpp" in the Mesos source tree).
>>>> @Tim, could you answer the "docker start" vs. "docker run" question?
>>>>
>>>> On Fri, Aug 28, 2015 at 1:26 PM, Paul Bell <arach...@gmail.com> wrote:
>>>>
>>>>> Hi All,
>>>>>
>>>>> I first posted this to the Marathon list, but someone suggested I try
>>>>> it here.
>>>>>
>>>>> I'm still not sure what component (mesos-master, mesos-slave,
>>>>> marathon) generates the "docker run" command that launches containers on a
>>>>> slave node. I suppose that it's the framework executor (Marathon) on the
>>>>> slave that actually executes the "docker run", but I'm not sure.
>>>>>
>>>>> What I'm really after is whether we can get it to use "docker start"
>>>>> rather than "docker run".
>>>>>
>>>>> At issue here is some persistent data inside
>>>>> /var/lib/docker/aufs/mnt/<CTR_ID>. "docker run" will, by design, (re)launch
>>>>> my application with a different CTR_ID, effectively rendering that data
>>>>> inaccessible. But "docker start" will restart the existing container, and
>>>>> its "old" data will still be there.
>>>>>
>>>>> Thanks.
>>>>>
>>>>> -Paul
>>>>>
>>>>
>>>>
>>>
>>
>
