That can be controlled with the right value for the agent's gc_delay flag.

I'm running a very basic test where I accept a request, write a file
to the sandbox, sleep for 100s, then exit. After the task exits, I probe the
next offer.

Having not specified any value for disk_watch_interval, and assuming it
defaults to 60s, the new offer should have disk = (original value - size of
the file I wrote to the sandbox), right? Am I missing something here?
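
For concreteness, here is a minimal sketch of such a probe using the old
mesos.interface Python bindings (the framework name and master address are
placeholders):

    from mesos.interface import Scheduler, mesos_pb2
    from mesos.native import MesosSchedulerDriver

    class DiskProbeScheduler(Scheduler):
        def resourceOffers(self, driver, offers):
            for offer in offers:
                # The "disk" resource is a scalar measured in MB.
                disk = next((r.scalar.value for r in offer.resources
                             if r.name == "disk"), 0.0)
                print("agent %s offers %.0f MB of disk"
                      % (offer.hostname, disk))
                driver.declineOffer(offer.id)

    framework = mesos_pb2.FrameworkInfo()
    framework.user = ""  # let Mesos fill in the current user
    framework.name = "disk-probe"
    driver = MesosSchedulerDriver(DiskProbeScheduler(), framework,
                                  "127.0.0.1:5050")  # placeholder master
    driver.run()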

Arjun

On Fri, Feb 12, 2016 at 5:05 PM, Chong Chen <chong.ch...@huawei.com> wrote:

> Hi,
>
> I think the garbage collector of Mesos agent will remove the directory of
> the finished task.
>
> Thanks!
>
>
>
> From: Arkal Arjun Rao [mailto:aa...@ucsc.edu]
> Sent: Friday, February 12, 2016 4:22 PM
> To: user@mesos.apache.org
> Subject: Re: Updated agent resources with every offer.
>
>
>
> Hi Vinod,
>
>
>
> Thanks for the reply. I think I understand what you mean. Could you
> help me with these follow-up questions?
>
>
>
> 1. So if I did write to the sandbox, Mesos would know and send the correct
> offer?
>
> 2. And if so, and this might be hacky, if I bind-mounted my Docker folder
> (where all cached images are stored) into a sandbox directory, do you think
> Mesos would register the correct state of the disk in the offer? (Suppose I
> were to spawn a possibly persistent job that requests 0 cores, 0 memory and
> 0 GB of disk, and use its sandbox; see the sketch below.)
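>
> To make (2) concrete, here is a rough sketch of the volume I have in mind,
> using the Python protobuf bindings (the host path is an assumption about
> where Docker keeps its images, and I don't know whether the agent's disk
> usage check would follow the mount):
>
>     from mesos.interface import mesos_pb2
>
>     task = mesos_pb2.TaskInfo()
>     # ... task_id, slave_id, command, resources elided ...
>     task.container.type = mesos_pb2.ContainerInfo.MESOS
>     volume = task.container.volumes.add()
>     volume.host_path = "/var/lib/docker"    # assumed Docker root
>     volume.container_path = "docker-cache"  # relative => inside the sandbox
>     volume.mode = mesos_pb2.Volume.RW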
>
>
>
> Thanks again,
>
> Arjun
>
>
>
> On Fri, Feb 12, 2016 at 4:08 PM, Vinod Kone <vinodk...@apache.org> wrote:
>
> If your job is writing stuff outside the sandbox, it is up to your
> framework to do that resource accounting. It is really tricky for Mesos to
> do that. For example, the second job might be launched even before the
> first one finishes.
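>
> A sketch of what that accounting could look like on the framework side (all
> names here are made up for illustration):
>
>     leftover_mb = {}  # agent id -> MB known to be left behind on that agent
>
>     def usable_disk_mb(offer):
>         # Offered "disk" is a scalar in MB; subtract what we know is gone.
>         offered = next((r.scalar.value for r in offer.resources
>                         if r.name == "disk"), 0.0)
>         return offered - leftover_mb.get(offer.slave_id.value, 0.0)
>
>     # When a task finishes and reports what it left on disk:
>     #   leftover_mb[agent_id] = leftover_mb.get(agent_id, 0.0) + reported_mb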
>
>
>
> On Fri, Feb 12, 2016 at 3:46 PM, Arkal Arjun Rao <aa...@ucsc.edu> wrote:
>
> Hi All,
>
>
>
> I'm new to Mesos and I'm working on a framework that strongly considers
> the disk value in an offer before making a decision. My jobs don't run in
> the agent's sandbox and may use Docker to pull images from my Docker Hub
> account and run containers on input data downloaded from S3.
>
>
>
> My jobs clean up after themselves but do not delete the cached Docker
> images after they complete, so a later job can use them directly without
> the delay of downloading the image again. I cannot predict how much a job
> will leave behind (a sketch of measuring it after the fact follows).
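>
> (A job could at least measure the leftover after the fact; a sketch, where
> the Docker storage path is an assumption:)
>
>     import os
>
>     def dir_size_mb(path):
>         """Sum the sizes of regular files under path, skipping symlinks."""
>         total = 0
>         for root, dirs, files in os.walk(path):
>             for name in files:
>                 fp = os.path.join(root, name)
>                 if not os.path.islink(fp):
>                     total += os.path.getsize(fp)
>         return total / (1024.0 * 1024.0)
>
>     cached_mb = dir_size_mb("/var/lib/docker")  # assumed Docker root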
>
>
>
> Leaving behind files after a job means that the disk space available for
> the next job is less than the disk value the current job had when it
> started. However, the next offer from the master does not appear to have an
> updated disk value. Is there any way to get the executor driver to update
> the value passed in the disk field of resource offers?
>
>
>
> Here's a Stack Overflow question with more details:
> http://stackoverflow.com/questions/35354841/setup-mesos-to-provide-up-to-date-disk-in-offers
>
>
>
> Thanks in advance,
>
> Arjun Arkal Rao
>
>
>
> PhD Candidate,
>
> Haussler Lab,
>
> UC Santa Cruz,
>
> USA
>
>
>
>
>
>
>
>
>
> --
>
> Arjun Arkal Rao
>
>
>
> PhD Student,
>
> Haussler Lab,
>
> UC Santa Cruz,
>
> USA
>
>
>
> aa...@ucsc.edu
>
>
>



-- 
Arjun Arkal Rao

PhD Student,
Haussler Lab,
UC Santa Cruz,
USA

aa...@ucsc.edu
