I think shared persistent volumes are a great feature addition!

A couple of things need to be addressed:

--> Who gets charged for the disk quota when multiple tasks share a
volume? I'm assuming the one that creates the RW copy?

--> How does the resource handling logic work in Mesos if a resource is
re-offered while still in use? I think this is the piece that needs some
major thinking.

It's worthwhile to start a design doc around this. Anindya, are you up for
the task?

Thanks,

On Mon, Sep 21, 2015 at 11:00 AM, Anindya Sinha <anindya.si...@gmail.com>
wrote:

> Now that Mesos has introduced support for persistent volumes, a
> persistent volume created on a specific slave is offered to the
> framework(s) as another resource. As a result, a task that needs RW
> access to that persistent volume can use that resource.
>
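For context, an offered persistent volume currently looks roughly like the
sketch below (protobuf text format); the id, role, and path values are made
up for illustration, not taken from a real cluster.

  # Rough sketch of a disk Resource carrying a persistent volume, as a
  # framework might see it in an offer. All values are illustrative.
  name: "disk"
  type: SCALAR
  scalar { value: 128 }
  role: "webservice"
  disk {
    persistence { id: "my-volume" }
    volume {
      container_path: "data"
      mode: RW          # the task gets read-write access to the volume
    }
  }
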
> While this task is running, that persistent volume cannot be offered to
> the framework(s) and hence is not available to another task running on
> the same agent.
>
> Let us consider a use case wherein a service needs access to the same
> persistent volume from multiple task instances running on the same agent
> simultaneously, i.e. when two (or more) such task instances are RUNNING.
> Since the persistent volume is not offered as a resource to the
> framework(s) while a task that has grabbed it is still active, subsequent
> instances that need access to the same persistent volume cannot be
> launched.
>
> To address that scenario, we propose making persistent volumes
> optionally "sharable". The default behavior can still be "non-sharable",
> but frameworks may want to CREATE persistent volumes as "sharable" (which
> would need an optional field in Resources.DiskInfo.Persistence). Hence,
> we would allow "sharable" persistent volumes to be offered as resources
> to the framework(s) [matching role] even if they have already been
> grabbed by a task running on the agent, so that subsequent tasks can use
> that persistent volume.
>
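If I'm reading the proposal right, the .proto change would be on the order
of a single optional field on Persistence. A rough sketch of what that
might look like is below; the "shared" field name and its tag number are
placeholders I'm making up here, not anything that exists in Mesos today.

  // Hypothetical sketch only: the "shared" field is the proposed
  // addition; its name and tag number are placeholders.
  message Persistence {
    required string id = 1;

    // Proposed: if true, the volume may be offered to, and used by,
    // multiple tasks on the same agent at the same time. Defaults to
    // today's exclusive (non-sharable) behavior.
    optional bool shared = 2 [default = false];
  }
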
> Comments/concerns?
>
> Thanks
>
> Anindya
>
