On Tue, Aug 12, 2014 at 1:17 PM, Thomas Petr wrote:
> That solution would likely cause us more pain -- we'd still need to figure
> out an appropriate amount of resources to request for artifact downloads /
> extractions, our scheduler would need to be sophisticated enough to only
> accept offers …
You may already know this, but this does sound similar to
http://www.mail-archive.com/user@mesos.apache.org/msg00885.html
A possible (though partial) solution was to use soft limits for memory; a
ticket was opened for it.
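For what it's worth, under cgroup v1 the soft-limit idea boils down to writing
memory.soft_limit_in_bytes in the container's memory cgroup. A minimal sketch
(the cgroup path below is hypothetical; the real one depends on how the
slave's cgroup hierarchy is configured):

```python
import os

# Hypothetical cgroup v1 path for a Mesos container; the real path
# depends on the slave's cgroup hierarchy configuration.
CGROUP = "/sys/fs/cgroup/memory/mesos/some-container-id"

def set_soft_limit(cgroup_path, limit_bytes):
    """Set a cgroup v1 soft memory limit by writing
    memory.soft_limit_in_bytes inside the given cgroup directory."""
    limit_file = os.path.join(cgroup_path, "memory.soft_limit_in_bytes")
    with open(limit_file, "w") as f:
        f.write(str(limit_bytes))

# e.g. let the task burst above 512 MB while memory is plentiful:
# set_soft_limit(CGROUP, 512 * 1024 * 1024)
```

Unlike memory.limit_in_bytes, exceeding the soft limit does not trigger the
OOM killer; the kernel only reclaims the excess (page cache first) when the
machine comes under memory pressure, which is why it was floated as a fix for
this class of problem.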
That solution would likely cause us more pain -- we'd still need to figure
out an appropriate amount of resources to request for artifact downloads /
extractions, our scheduler would need to be sophisticated enough to only
accept offers from the same slave that the setup task ran on, and we'd need …
Thanks, Thomas, for the clarification.
One solution you could consider would be separating the setup
(fetch/extract) phase and the running phase into separate Mesos tasks. That way
you can give the setup task the resources needed for fetching/extracting and,
as soon as it is done, you can send a TASK_FINISHED …
Hey Vinod,
We're not using mesos-fetcher to download the executor -- we ensure our
executor exists on the slaves beforehand (during machine provisioning, to
be exact). The issue that Whitney is talking about is OOMing while fetching
artifacts necessary for task execution (like the JAR for a web service) …
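One mitigation for the page-cache side of this (sketched here as an
assumption, not something the thread settled on) is to stream the artifact to
disk and advise the kernel to drop the written pages as you go, so the fetch
doesn't inflate the cgroup's charged memory. This needs Linux and Python 3.3+
for os.posix_fadvise; the function name is illustrative:

```python
import os
import urllib.request

def fetch_without_cache_pressure(url, dest, chunk=1 << 20):
    """Stream an artifact to disk, periodically asking the kernel to
    drop the written pages from the page cache so they are not charged
    against the container's memory cgroup."""
    with urllib.request.urlopen(url) as resp, open(dest, "wb") as out:
        written = 0
        while True:
            buf = resp.read(chunk)
            if not buf:
                break
            out.write(buf)
            written += len(buf)
            out.flush()
            # Pages must be written back (clean) before DONTNEED can drop them.
            os.fsync(out.fileno())
            os.posix_fadvise(out.fileno(), 0, written,
                             os.POSIX_FADV_DONTNEED)
```

The fsync-per-chunk costs throughput, but it keeps the cgroup's page-cache
footprint bounded by roughly one chunk instead of the whole artifact.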
Hi Whitney,
While we could conceivably set the container id in the environment of the
executor, I would like to understand the problem you are facing.
The fetching and extracting of the executor is done by mesos-fetcher, a
process forked by the slave and run under the slave's cgroup. AFAICT, this
shou…
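A quick way to verify which cgroup the fetcher (or any process) actually runs
under is to parse /proc/<pid>/cgroup. A small helper, assuming the cgroup v1
line format (subsystem-id:subsystems:path):

```python
def parse_proc_cgroup(text):
    """Parse the contents of /proc/<pid>/cgroup into {subsystem: path}.
    Under cgroup v1, lines look like '4:memory:/mesos/<container-id>'."""
    result = {}
    for line in text.splitlines():
        if not line:
            continue
        _, subsystems, path = line.split(":", 2)
        for sub in subsystems.split(","):
            result[sub] = path
    return result

def cgroups_of(pid):
    """Return the cgroup membership of a live process."""
    with open("/proc/%d/cgroup" % pid) as f:
        return parse_proc_cgroup(f.read())

# e.g. compare the fetcher's memory cgroup to the slave's:
# cgroups_of(fetcher_pid).get("memory") == cgroups_of(slave_pid).get("memory")
```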
Hi Niklas,
I want to do this from a custom executor. I think I can accomplish
everything I need as things exist today; however, it would be nice if I
didn't have to make an API call to grab the container id.
However, regarding the general issue, the root cause is sort of discussed
here:
http://ma
Hi Whitney,
Are you thinking of an API to do that from within any executor, or the
command-executor in particular? The executor won't start before the fetcher
has pulled all artifacts, so wouldn't it be too late to change the cgroup
limits from within the executor?
If not, you should be able to exper…
We're still seeing sporadic cgroup OOMs due to page cache usage (even with
the 3.4.98 kernel) in the download and untar process of our executor.
One thing I'd like to experiment with is dynamically changing
cgroup memory limits from the executor process itself (since it knows when
it will …
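As an illustration of that experiment (assuming cgroup v1, a known container
cgroup path, and that the executor is allowed to write to its own memory
cgroup, which normally requires root or the slave's cooperation), the executor
could temporarily raise its hard limit around the fetch/extract step:

```python
import contextlib
import os

@contextlib.contextmanager
def bumped_memory_limit(cgroup_path, fetch_limit_bytes):
    """Temporarily raise a cgroup v1 hard memory limit while fetching and
    extracting artifacts, then restore the original limit afterwards."""
    limit_file = os.path.join(cgroup_path, "memory.limit_in_bytes")
    with open(limit_file) as f:
        original = f.read().strip()
    with open(limit_file, "w") as f:
        f.write(str(fetch_limit_bytes))
    try:
        yield
    finally:
        with open(limit_file, "w") as f:
            f.write(original)

# Hypothetical usage; the cgroup path and setup step are placeholders:
# with bumped_memory_limit("/sys/fs/cgroup/memory/mesos/<container-id>", 2 << 30):
#     fetch_and_extract_artifacts()
```

Whether this is safe depends on the slave not concurrently rewriting the same
limit, which is exactly the kind of coordination the thread is circling
around.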