Re: make slaves not getting tasks anymore

2015-12-30 Thread Mike Michel
I am using Marathon, and from Shuai Lin's answer it still seems that maintenance mode is not the right option for me. I don't want Marathon to move the tasks to another node (phase 1) without user action (restarting the task), and it should also not just kill the tasks (phase 2). To be concrete:

Re: make slaves not getting tasks anymore

2015-12-30 Thread Mike Michel
Whitelist seems to be the best option right now. I will try that. Thanks. From: Jeremy Olexa [mailto:jol...@spscommerce.com] Sent: Wednesday, 30 December 2015 17:22 To: user@mesos.apache.org Subject: Re: make slaves not getting tasks anymore Hi Mike, Yes, there is another way
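A minimal sketch of the whitelist approach, assuming a master started with the --whitelist flag (hostnames and paths below are illustrative, not from the thread): the file lists one agent hostname per line, the master re-reads it periodically, and removing a host stops new offers to it while already-running tasks continue.

    # /etc/mesos/whitelist -- one agent hostname per line
    slave1.example.com
    slave2.example.com

    # start the master pointing at the file
    mesos-master --zk=zk://zk1:2181/mesos --quorum=2 \
      --work_dir=/var/lib/mesos --whitelist=file:///etc/mesos/whitelist

    # to drain slave2: delete its line from the file; existing tasks keep
    # running, but no new offers are made for that agent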

Re: The issue of "Failed to shutdown socket with fd xx: Transport endpoint is not connected" on Mesos master

2015-12-30 Thread Avinash Sridharan
Thanks for the update, Nan. k8s enabling firewall rules that would block traffic to the master seems a bit odd. Looks like a bug to me, in the head of the branch. If you are able to reproduce it consistently, could you file an issue against kubernetes-mesos? Regards, Avinash On Tue, Dec 29, 2015

Re: mesos-master v0.26 crashes for quorum 0

2015-12-30 Thread Adam Bordelon
You should never specify a quorum of 0. For 1 master, you specify a quorum of 1. For 3 masters, the quorum is 2. For 5 masters, the quorum is 3. For 7 masters, the quorum is 4. The quorum dictates how many masters (log replicas) have to agree on a fact to win a vote. If you have a quorum of 0, then no masters
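In other words, for N masters the quorum is floor(N/2) + 1, i.e. a strict majority. A hedged example of passing it on the command line (ZooKeeper addresses and paths are illustrative):

    # 3 masters -> quorum of 2
    mesos-master --zk=zk://zk1:2181,zk2:2181,zk3:2181/mesos \
      --quorum=2 --work_dir=/var/lib/mesos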

Re: mesos, big data and service discovery

2015-12-30 Thread Shuai Lin
What about specifying all non-local instances as "backup" in haproxy.cfg? This way haproxy would only direct traffic to the local instance as long as the local instance is alive. For example, if you plan to use the haproxy-marathon-bridge script, you can modify this line to achieve that:
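A hedged illustration of the "backup" idea in haproxy.cfg (backend name, ports, and addresses are made up, not taken from the bridge script): haproxy only sends traffic to backup servers when all non-backup servers in the backend are down.

    backend my_app
        server local  127.0.0.1:31000 check
        server remote 10.0.0.2:31000  check backup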

Re: The issue of "Failed to shutdown socket with fd xx: Transport endpoint is not connected" on Mesos master

2015-12-30 Thread Nan Xiao
Hi Avinash, Sorry for my unclear expression! The root cause is not related to k8s, but to the CentOS host which k8s is running on: it is an iptables issue. After executing "iptables -F", it works! Best Regards, Nan Xiao On Wed, Dec 30, 2015 at 11:41 PM, Avinash Sridharan
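For reference, a short sketch of the commands involved (flushing removes all rules, which is what worked above; identifying and deleting the specific offending rule would be a narrower fix):

    # inspect the rules that may be blocking traffic to the master
    iptables -L -n -v
    # flush all rules (what was done in the thread)
    iptables -F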

Re: More filters on /master/tasks enpoint and filters on /master/state

2015-12-30 Thread Adam Bordelon
See also: https://issues.apache.org/jira/browse/MESOS-2258 - Enable filtering of task information in master/state.json https://issues.apache.org/jira/browse/MESOS-2353 - Improve performance of the state.json endpoint for large clusters. https://issues.apache.org/jira/browse/MESOS-2157 - Add
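Until such filtering lands in the endpoints themselves, it is typically done client-side. A hedged sketch (host, port, and jq filter are illustrative):

    # list only running tasks from the master's state endpoint
    curl -s http://master.example.com:5050/master/state \
      | jq '[.frameworks[].tasks[] | select(.state == "TASK_RUNNING") | {id, slave_id}]'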

How can mesos print logs from VLOG function?

2015-12-30 Thread Nan Xiao
Hi all, I want Mesos to print logs from the VLOG function: VLOG(1) << "Executor started at: " << self() << " with pid " << getpid(); But from the Mesos help: $ sudo ./bin/mesos-master.sh --help | grep -i LOG --external_log_file=VALUE Specified the externally managed log

Re: How can mesos print logs from VLOG function?

2015-12-30 Thread Marco Massenzio
Mesos uses Google Logging[0] and, according to the documentation there, the VLOG(n) calls are only logged if a variable GLOG_v=m (where m >= n) is configured when running Mesos (the other suggested way, using --v=m, won't work for Mesos). Having said that, I have recently been unable to make this
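A hedged sketch of what that looks like in practice (flags and paths are illustrative; the key point is that GLOG_v is set in the environment rather than passed as a Mesos flag):

    # enable VLOG(1) and lower for the master
    sudo GLOG_v=1 ./bin/mesos-master.sh --ip=127.0.0.1 --work_dir=/var/lib/mesos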

make slaves not getting tasks anymore

2015-12-30 Thread Mike Michel
Hi, I need to update slaves from time to time and am looking for a way to take them out of the cluster without killing the running tasks. I need to wait until all tasks are done, and during this time no new tasks should be started on this slave. My first idea was to set a constraint

Re: How can mesos print logs from VLOG function?

2015-12-30 Thread Nan Xiao
Hi Marco, Yes, it worked, thanks very much! Best Regards, Nan Xiao On Wed, Dec 30, 2015 at 4:55 PM, Marco Massenzio wrote: > Mesos uses Google Logging[0] and, according to the documentation there, the > VLOG(n) calls are only logged if a variable GLOG_v=m (where m >= n) is

Re: make slaves not getting tasks anymore

2015-12-30 Thread Klaus Ma
Hi Mike, Which framework are you using? How about the Maintenance scheduling feature? My understanding is that the framework should not dispatch tasks to an agent scheduled for maintenance, so the operator can wait for all tasks to finish before taking any action. For "When maintenance is triggered by the operator", it's

Re: make slaves not getting tasks anymore

2015-12-30 Thread Shuai Lin
> > I need to wait until all tasks are done and during this time no new tasks > should be started on this slave This is exactly what maintenance mode is designed for. But to achieve this, it requires the cooperation of the framework. When the operator adds a maintenance schedule for a slave,
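A hedged sketch of how an operator adds such a schedule via the master's maintenance endpoint (hostname, IP, and timestamps are placeholders; nanosecond values are illustrative):

    curl -X POST http://master.example.com:5050/master/maintenance/schedule \
      -H 'Content-Type: application/json' -d '{
        "windows": [{
          "machine_ids": [{"hostname": "slave1.example.com", "ip": "10.0.0.10"}],
          "unavailability": {
            "start":    {"nanoseconds": 1451606400000000000},
            "duration": {"nanoseconds": 3600000000000}
          }
        }]
      }'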

Re: make slaves not getting tasks anymore

2015-12-30 Thread Dick Davies
It sounds like you want to use checkpointing; that should keep the tasks alive as you update the mesos slave process itself. On 30 December 2015 at 11:43, Mike Michel wrote: > Hi, > > i need to update slaves from time to time and looking for a way to take them > out of
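A hedged sketch of the pieces involved, assuming Marathon as the framework (flag names as of Mesos/Marathon in that era; verify against your versions): the framework must opt in to checkpointing, and the agent is restarted with recovery enabled so it reattaches to the checkpointed executors.

    # agent side: restart with recovery so running executors are reattached
    mesos-slave --master=zk://zk1:2181/mesos --work_dir=/var/lib/mesos --recover=reconnect

    # framework side (Marathon): start with checkpointing enabled
    marathon --master zk://zk1:2181/mesos --checkpoint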