Re: How to filter GET_TASKS api result
> On Apr 19, 2017, at 5:00 PM, Benjamin Mahler wrote:
>
> We can add a Call.GetTasks message to allow you to specify which task ids
> you would like to retrieve. But this isn't supported yet, the code needs
> to be written. E.g.
>
> message Call {
>   enum Type {
>     GET_TASKS = 13; // Retrieves the information about tasks, see `GetTasks` below.
>   }
>
>   message GetTasks {
>     // Which tasks to retrieve, leave empty to retrieve all tasks.
>     repeated TaskID task_ids;
>   }
> }

See also https://issues.apache.org/jira/browse/MESOS-6935. It makes sense
to be able to ask for specific FrameworkIDs too.

> On Thu, Apr 6, 2017 at 8:31 PM, 梦开始的地方 <382607...@qq.com> wrote:
>
> But Spark and Chronos have too many short tasks; getting all tasks is too slow.
>
> -- Original Message --
> From: "Alexander Rojas"
> Date: Monday, April 3, 2017, 9:47 PM
> To: "user"
> Subject: Re: How to filter GET_TASKS api result
>
> Hi,
>
> Mesos does not have a way to get info about a single task, however the
> answer should be pretty easy to filter so you can search for the task
> you're looking for.
>
> Alexander Rojas
> alexan...@mesosphere.io
>
>> On 20 Mar 2017, at 10:35, 梦开始的地方 <382607...@qq.com> wrote:
>>
>> Hi, I'd like to use the GET_TASKS API to get a specific task, but the API
>> returns all tasks.
>> Please help me, thanks.
Re: How to filter GET_TASKS api result
We can add a Call.GetTasks message to allow you to specify which task ids
you would like to retrieve. But this isn't supported yet, the code needs to
be written. E.g.

message Call {
  enum Type {
    GET_TASKS = 13; // Retrieves the information about tasks, see `GetTasks` below.
  }

  message GetTasks {
    // Which tasks to retrieve, leave empty to retrieve all tasks.
    repeated TaskID task_ids;
  }
}

On Thu, Apr 6, 2017 at 8:31 PM, 梦开始的地方 <382607...@qq.com> wrote:
>
> But Spark and Chronos have too many short tasks; getting all tasks is too slow.
>
> -- Original Message --
> From: "Alexander Rojas"
> Date: Monday, April 3, 2017, 9:47 PM
> To: "user"
> Subject: Re: How to filter GET_TASKS api result
>
> Hi,
>
> Mesos does not have a way to get info about a single task, however the
> answer should be pretty easy to filter so you can search for the task
> you're looking for.
>
> Alexander Rojas
> alexan...@mesosphere.io
>
>> On 20 Mar 2017, at 10:35, 梦开始的地方 <382607...@qq.com> wrote:
>>
>> Hi, I'd like to use the GET_TASKS API to get a specific task, but the API
>> returns all tasks.
>> Please help me, thanks.
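[Editor's note] Until something like the proposed Call.GetTasks filter (or the
framework-level filtering discussed in MESOS-6935) is implemented, the only
option is the client-side filtering Alexander describes. A minimal sketch of
that workaround, assuming a reachable master at a placeholder URL; the JSON
field names ("get_tasks", "task_id", "framework_id") follow the v1 operator
API as I understand it and should be verified against your Mesos version:

    # Fetch all tasks from the v1 operator API and filter locally.
    # MASTER and WANTED_TASK_IDS are placeholders, not values from the thread.
    import requests

    MASTER = "http://mesos-master.example.com:5050"   # assumption: your master URL
    WANTED_TASK_IDS = {"my-task-id"}                  # assumption: the task(s) you want

    resp = requests.post(
        MASTER + "/api/v1",
        json={"type": "GET_TASKS"},
        headers={"Accept": "application/json"},
    )
    resp.raise_for_status()

    all_tasks = resp.json().get("get_tasks", {}).get("tasks", [])

    # Keep only the tasks whose IDs we asked for.
    matching = [t for t in all_tasks
                if t.get("task_id", {}).get("value") in WANTED_TASK_IDS]

    for task in matching:
        print(task["task_id"]["value"],
              task.get("state"),
              task.get("framework_id", {}).get("value"))

Note that this still transfers every task over the wire before filtering,
which is exactly the cost the original poster is complaining about for
frameworks with many short tasks; only a server-side filter like the proposed
GetTasks message would avoid that.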
Re: [Design doc] RPC: Fault domains in Mesos
Hi Maxime,

Thanks for the feedback!

The proposed approach is definitely simplistic. The "Discussion" section of
the design doc describes some of the rationale for starting with a very
simple scheme: basically, because

(a) we want to assign clear semantics to the levels of the hierarchy
    (regions are far away from each other and inter-region network links
    have high latency; racks are close together and inter-rack network
    links have low latency);
(b) we don't want to make life too difficult for framework authors;
(c) most server software (e.g., HDFS, Kafka, Cassandra, etc.) only
    understands a simple hierarchy -- in many cases, just a single level
    ("racks"), or occasionally two levels ("racks" and "DCs").

Can you elaborate on the use-cases that you see for a more complex hierarchy
of fault domains? I'd be happy to chat off-list if you'd prefer.

Thanks!
Neil

On Tue, Apr 18, 2017 at 1:33 AM, Maxime Brugidou wrote:
> Hi Neil,
>
> I really like the idea of incorporating the concept of fault domains in
> Mesos, however I feel like the implementation proposed is a bit narrow to
> be actually useful for most users.
>
> I feel like we could make the fault domain definition more generic. As an
> example, in our setup we would like to have something like
> Region > Building > Cage > Pod > Rack. Failure domains would be
> hierarchically arranged (meaning one domain at a lower level can only be
> included in one domain above).
>
> As a concrete example, we could have the Mesos masters be aware of the
> fault domain hierarchy (with a config map, for example), and slaves would
> just need to declare their lowest-level domain (for example, their rack
> id). Then frameworks could use this domain hierarchy at will. If they need
> to "spread" their tasks for a very highly available setup, they could
> first spread using the highest fault domain (like the region), then, if
> they have enough tasks to launch, they could spread within each sub-domain
> recursively until they run out of tasks to spread. We do not need to
> artificially limit the number of levels of fault domains or the names of
> the fault domains. Schedulers do not need to know the names either, just
> the hierarchy.
>
> Then, to provide the other feature of "remote" slaves that you describe,
> we could configure the Mesos master to only send offers from a "default"
> local fault domain, and frameworks would need to advertise a certain
> capability to receive offers for other remote fault domains.
>
> I feel we could implement this by identifying a fault domain with a simple
> list of ids like ["US-WEST-1", "Building 2", "Cage 3", "POD 12", "Rack 3"]
> or ["US-EAST-2", "Building 1"]. Slaves would advertise their lowest-level
> fault domains and schedulers could use this arbitrarily as a hierarchical
> list.
>
> Thanks,
> Maxime
>
> On Mon, Apr 17, 2017 at 6:45 PM Neil Conway wrote:
>>
>> Folks,
>>
>> I'd like to enhance Mesos to support a first-class notion of "fault
>> domains" -- i.e., identifying the "rack" and "region" (DC) where a Mesos
>> agent or master is located. The goal is to enable two main features:
>>
>> (1) To make it easier to write "rack-aware" Mesos frameworks that are
>> portable to different Mesos clusters.
>>
>> (2) To improve the experience of configuring Mesos with a set of masters
>> and agents in one DC, and another pool of "remote" agents in a different
>> DC.
>>
>> For more information, please see the design doc:
>>
>> https://docs.google.com/document/d/1gEugdkLRbBsqsiFv3urRPRNrHwUC-i1HwfFfHR_MvC8
>>
>> I'd love any feedback, either directly on the Google doc or via email.
>>
>> Thanks,
>> Neil
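[Editor's note] The recursive "spread" placement Maxime sketches above is
straightforward to express once an agent's fault domain is just an ordered
list of ids. Below is a purely illustrative sketch, not anything from the
design doc or from Mesos itself; the agent records and domain ids are made
up, and the even divmod split is only one possible policy (a real scheduler
might weight sub-domains by capacity instead):

    # Spread N tasks across the highest fault-domain level first, then
    # recurse into each sub-domain. The scheduler never needs to know what
    # the levels are called, only their order.
    from collections import defaultdict

    def spread(agents, num_tasks, level=0):
        """Return a list of agents (one per task), spreading tasks as evenly
        as possible across the fault domains at `level`, then recursing."""
        if num_tasks == 0 or not agents:
            return []

        # No deeper domain level left: assign round-robin among these agents.
        if all(len(a["domain"]) <= level for a in agents):
            return [agents[i % len(agents)] for i in range(num_tasks)]

        # Group agents by their domain id at this level of the hierarchy.
        groups = defaultdict(list)
        for a in agents:
            key = a["domain"][level] if len(a["domain"]) > level else None
            groups[key].append(a)

        # Hand out tasks to the groups as evenly as possible, then recurse.
        placements = []
        group_list = sorted(groups.items(), key=lambda kv: str(kv[0]))
        base, extra = divmod(num_tasks, len(group_list))
        for i, (_, members) in enumerate(group_list):
            quota = base + (1 if i < extra else 0)
            placements.extend(spread(members, quota, level + 1))
        return placements

    # Hypothetical inventory: two regions with uneven rack counts.
    agents = [
        {"id": "a1", "domain": ["US-WEST-1", "Rack 1"]},
        {"id": "a2", "domain": ["US-WEST-1", "Rack 2"]},
        {"id": "a3", "domain": ["US-EAST-2", "Rack 1"]},
    ]
    print([a["id"] for a in spread(agents, 4)])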
Re: Mesos (and Marathon) port mapping
Hello,

Sorry to insist, but is the understanding below correct? I'm really not
sure. I understand that the network/port_mapping isolator uses disjoint
port ranges to multiplex traffic to the same ports inside containers, but
I'm not sure whether we're talking about ephemeral or non-ephemeral ports
here, nor whether I correctly understand which kind of port is meant for
which kind of use.

About the direct mapping: what in the container is listening on the mapped
port? How?

Also, which of these (ephemeral vs. non-ephemeral) ports is what Marathon
calls a hostPort?

Here's my initial understanding:

Thanks.

On 04/05/2017 12:23 PM, Thomas HUMMEL wrote:

Ok, thanks.

So if I wrap my head around all of this and try to answer my original
question, I come up with the following understanding:

- servicePorts are a Marathon-only concept.

- The port mapping isolator is not compatible with the Docker
  containerizer.

- The port mapping isolator is useful when you cannot afford one IP per
  container.

- The port mapping isolator uses *ephemeral* ports to multiplex traffic
  into containers: the *ephemeral* port range is divided into *disjoint*
  subsets of *contiguous* ports, each one assigned to one container with a
  direct mapping hostPort <-> containerPort.

- Non-ephemeral ports are offered to frameworks as a resource, so
  containers get *disjoint* sets of them, but *not in a contiguous* range.

- The default port range offered by a slave is [31000-32000]: those are
  *non-ephemeral* ports, and this is unrelated to whether the port-mapping
  isolator is enabled.

- With the Docker containerizer in HOST mode, the Marathon framework is
  offered such a port (in the [31000-32000] range) and shows it in the GUI,
  but the app can bind to any different host port *not in that range*
  (e.g. 9090). In BRIDGE mode, the Marathon so-called 'hostPort' has to be
  in that range (why is that?).

Am I right this time? ;-)

Thanks

--
TH
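[Editor's note] To make the hostPort / containerPort / servicePort
distinction concrete, here is a hedged sketch of a Docker BRIDGE-mode app
definition posted to Marathon. The Marathon URL and image are placeholders,
and the exact placement of "portMappings" in the app JSON has changed
between Marathon versions, so treat this as illustrative rather than
canonical:

    # containerPort: what the process binds inside the container.
    # hostPort: taken from the agent's "ports" resource (default range
    #   [31000-32000]); 0 asks Marathon to pick one from an offer.
    # servicePort: exists only in Marathon (service discovery / LB);
    #   Mesos never sees it. Omitted here so Marathon assigns one.
    import requests

    MARATHON = "http://marathon.example.com:8080"   # assumption: your Marathon URL

    app = {
        "id": "/port-mapping-demo",
        "cpus": 0.1,
        "mem": 64,
        "instances": 1,
        "container": {
            "type": "DOCKER",
            "docker": {
                "image": "nginx",
                "network": "BRIDGE",
                "portMappings": [
                    {"containerPort": 80, "hostPort": 0, "protocol": "tcp"}
                ],
            },
        },
    }

    resp = requests.post(MARATHON + "/v2/apps", json=app)
    print(resp.status_code, resp.text)

In HOST mode there is no mapping at all: the container shares the agent's
network namespace and binds whatever port it likes, which matches Thomas's
observation that the process can listen on e.g. 9090 while Marathon
advertises a port from the offered range. If I recall correctly, nothing
enforces the match unless the app reads the assigned port (e.g. from the
$PORT0 environment variable Marathon injects) and binds it.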