On each host you have to set it to the interface that is connected to the
network your cluster is running in.
On 28.04.2015, at 16:41, Nikolay Borodachev nbo...@adobe.com wrote:
> Hi Dario,
> This could be the reason but why would it not bind to all network interfaces
> by default?
> To
Been banging my head against this for a while now.
mesos 0.21.0, marathon 0.7.5, CentOS 6 servers.
When I enable cgroups (flags are: --cgroups_limit_swap
--isolation=cgroups/cpu,cgroups/mem) the memory limits I'm setting
are reflected in memory.soft_limit_in_bytes but not in
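For reference, a minimal sketch of the setup being described (the master URL and the container-ID path are placeholders, and a typical cgroup v1 mount under /sys/fs/cgroup is assumed):

```shell
# Start the slave with swap-aware cgroup isolation, matching the flags
# quoted above (master address is illustrative).
mesos-slave \
  --master=zk://zk1:2181/mesos \
  --isolation=cgroups/cpu,cgroups/mem \
  --cgroups_limit_swap

# Inspect the limits Mesos wrote for a running executor's cgroup.
# <container-id> is a placeholder for an actual container directory.
CG=/sys/fs/cgroup/memory/mesos/<container-id>
cat "$CG/memory.soft_limit_in_bytes"   # soft limit (set from the task's mem)
cat "$CG/memory.limit_in_bytes"        # hard limit
cat "$CG/memory.memsw.limit_in_bytes"  # mem+swap limit (with --cgroups_limit_swap)
```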
Hi Dario,
This could be the reason but why would it not bind to all network interfaces by
default?
To test it out, should I set LIBPROCESS_IP to the IP address of the mesos1 server?
Thank you
Nikolay
From: Dario Rexin [mailto:da...@mesosphere.io]
Sent: Tuesday, April 28, 2015 4:31 AM
To:
Looks like we've got a new blocker:
https://issues.apache.org/jira/browse/MESOS-2668
The patch is up for review already, and we'll cut a Mesos-0.22.1-rc6 once
it's in.
Any other patches that need to make it into rc6?
On Fri, Apr 24, 2015 at 5:56 PM, Adam Bordelon a...@mesosphere.io wrote:
Hi
I actually have the '--ip' parameter set for both master and slave. So,
LIBPROCESS_IP should only be set for Marathon?
From: Dario Rexin [mailto:da...@mesosphere.io]
Sent: Tuesday, April 28, 2015 9:56 AM
To: user@mesos.apache.org
Subject: Re: Marathon change of leader and stalled deployments
On
On master and slave you should be able to start it with the --ip parameter
instead of using the env variable. But you should set the IP to a fixed value
for all processes.
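A minimal sketch of what a fixed IP per process could look like (the hostnames, the 10.0.0.x addresses, and the ZooKeeper paths are made up for illustration):

```shell
# On each host, use the address of the interface on the cluster network.
# Master and slave take it as a flag:
mesos-master --ip=10.0.0.1 --zk=zk://zk1:2181/mesos --quorum=1 --work_dir=/var/lib/mesos
mesos-slave  --ip=10.0.0.2 --master=zk://zk1:2181/mesos

# Marathon (and other libprocess-based frameworks) read it from the
# environment instead:
export LIBPROCESS_IP=10.0.0.3
./bin/start --master zk://zk1:2181/mesos --zk zk://zk1:2181/marathon
```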
On 28 Apr 2015, at 16:52, Nikolay Borodachev nbo...@adobe.com wrote:
> Is it for all 3 processes: master, slave, and
Thanks Ian.
Digging around the cgroup there are 3 processes in there:
* the mesos-executor
* the shell script marathon starts the app with
* the actual command to run the task (a perl app in this case)
The line of code you mention is never run in our case, because it's
wrapped in the
The line of code you cite is there so the hard limit is not decreased on a
running container, because we can't (easily) reclaim anonymous memory from
running processes. See the comment above the code.
The info->pid.isNone() check is for when the cgroup is being configured (see
the update() call at the end of
Hi Martin,
do all 3 ZooKeepers go down with the same error logs/cause? There should be
some info, as a single node failure should not cause ZK to fail (quorum is
maintained) and the remaining nodes should at least show some info from the
failure detector.
The original log you posted is from after stopping
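For checking each node's view of the ensemble, ZooKeeper's four-letter-word commands over the client port can help (the zk1/zk2/zk3 hostnames are examples):

```shell
# 'ruok' answers 'imok' if the server is running; 'srvr' reports the
# node's mode (leader/follower), which shows whether quorum is intact.
for host in zk1 zk2 zk3; do
  echo "== $host =="
  echo ruok | nc "$host" 2181
  echo srvr | nc "$host" 2181 | grep Mode
done
```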
On 04/28/2015 11:54 AM, Dick Davies wrote:
> Thanks Ian.
> Digging around the cgroup there are 3 processes in there:
> * the mesos-executor
> * the shell script marathon starts the app with
> * the actual command to run the task (a perl app in this case)
We've been having discussions about various
Ahh, my bad, I should have looked more closely at your version. This was a
bug that was introduced when the memsw functionality came in and was then
fixed in 0.22.0.
See:
https://issues.apache.org/jira/browse/MESOS-2128
commit 24cb10a2d68
I suggest upgrading to >= 0.22.0 or, if that's not desirable,
That did the trick! Thank you very much, Dario!
From: Dario Rexin [mailto:da...@mesosphere.io]
Sent: Tuesday, April 28, 2015 10:00 AM
To: user@mesos.apache.org
Subject: Re: Marathon change of leader and stalled deployments
Yes. Unfortunately that’s the only way to set the IP in the Mesos Java
That's what led me into reading the code: neither memory.limit_in_bytes
nor memory.memsw.limit_in_bytes is ever set down from the (insanely high)
defaults. I know that second conditional is false, so the first
must be too, right?
It's likely I'm reading the wrong branch; we're running the 0.21.0
You may very well be right, but I'd like to keep this specific thread
focussed on figuring
out why the expected/implemented behaviour isn't happening in my case
if that's ok.
On 28 April 2015 at 19:26, CCAAT cc...@tampabay.rr.com wrote:
> I really hate to be the 'old fashion computer scientist'
Hi Bharath,
As far as I'm aware there are no plans to make Hadoop MRv2 work with the Hadoop
on Mesos framework (https://github.com/mesos/hadoop), unless someone else in
the community is working on this and keeping quiet? We're certainly not working
on this.
My understanding is that
Is it for all 3 processes: master, slave, and marathon?
Thanks
Nikolay
From: Dario Rexin [mailto:da...@mesosphere.io]
Sent: Tuesday, April 28, 2015 9:47 AM
To: user@mesos.apache.org
Subject: Re: Marathon change of leader and stalled deployments
On each host you have to set it to the interface
Hi Tom,
Thanks for clarifying.
-Bharath
On 28-Apr-2015 2:41 pm, Tom Arnfeld t...@duedil.com wrote:
> Hi Bharath,
> As far as I'm aware there are no plans to make Hadoop MRv2 work with the
> Hadoop on Mesos framework (https://github.com/mesos/hadoop), unless
> someone else in the community is