Hi Vaibhav,

I just went through the blog; it seems the two slaves there use different work
directories (/tmp/mesos1 and /tmp/mesos2).
Regarding your log, your work directory is
--work_dir="/home/introom/work/mesos/work_dir";
did you set a different work directory for each of the two slaves?

Also, if you have updated a slave's configuration, e.g. its resources, you'll
hit a similar issue when restarting it. There's a JIRA (MESOS-1739) to support
re-configuration when restarting a slave.
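
For reference, a minimal sketch of two agent command lines with distinct work
directories and ports (the master address, ports, and paths below are only
illustrative, not taken from your setup):

  mesos-slave --master=<master-ip>:5050 --port=5051 --work_dir=/tmp/mesos1
  mesos-slave --master=<master-ip>:5050 --port=5052 --work_dir=/tmp/mesos2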


----
Da (Klaus), Ma (马达) | PMP® | Advisory Software Engineer
Platform Symphony/DCOS Development & Support, STG, IBM GCG
+86-10-8245 4084 | klaus1982...@gmail.com | http://k82.me

On Wed, Jan 13, 2016 at 4:04 AM, Vaibhav Khanduja <vaibhavkhand...@gmail.com
> wrote:

> Tried something similar here,
> http://veekeay.blogspot.com/2016/01/fine-grained-resource-management-using.html
>
> On Tue, Jan 12, 2016 at 5:19 AM, Klaus Ma <klaus1982...@gmail.com> wrote:
>
>> The default behaviour is to report all cpu/mem, but there's a
>> module/plugin to report resources on demand; please refer to MESOS-3366.
>> And as haosdent said, you can also use containers for that. Furthermore, if
>> you want to bind a task to a specific CPU, the framework developer needs to
>> do that.
>>
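>> As a rough sketch, "--resources" takes a resource string on the agent
>> command line, e.g. (the values here are purely illustrative):
>>
>>   --resources="cpus:2;mem:4096"
>>
>> so each slave can be given its own non-overlapping share of the host.
>>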
>> ----
>> Da (Klaus), Ma (马达) | PMP® | Advisory Software Engineer
>> Platform Symphony/DCOS Development & Support, STG, IBM GCG
>> +86-10-8245 4084 | klaus1982...@gmail.com | http://k82.me
>>
>> On Tue, Jan 12, 2016 at 9:10 PM, Du, Fan <fan...@intel.com> wrote:
>>
>>> Thanks for the explanation.
>>> What I mean is: how do we prevent two slaves from using the same cpu at the
>>> same time?
>>>
>>> Apparently, if we launch two slave instances, they should have distinct
>>> (non-overlapping) cpu/mem resources to report back to the master. How does
>>> the current code achieve this functionality?
>>>
>>>
>>> On 2016/1/12 21:03, Klaus Ma wrote:
>>>
>>>> The resources of a slave can be defined via "--resources", but it cannot
>>>> express "50% of the cpus" by default. There's a module in the agent to
>>>> report how many resources can be used by the current slave.
>>>>
>>>> For this case, "--resources" is enough for him :).
>>>>
>>>> ----
>>>> Da (Klaus), Ma (马达) | PMP® | Advisory Software Engineer
>>>> Platform Symphony/DCOS Development & Support, STG, IBM GCG
>>>> +86-10-8245 4084 | klaus1982...@gmail.com | http://k82.me
>>>>
>>>> On Tue, Jan 12, 2016 at 9:00 PM, Du, Fan <fan...@intel.com> wrote:
>>>>
>>>>     How do we make slave_a use the first half of the cpu/memory and
>>>>     slave_b use the rest of it?
>>>>
>>>>
>>>>     On 2016/1/12 20:54, haosdent wrote:
>>>>
>>>>         Yes, you need to use a different work_dir and port for each slave.
>>>>
>>>>         On Tue, Jan 12, 2016 at 8:42 PM, Du, Fan <fan...@intel.com> wrote:
>>>>
>>>>              Just my 2 cents.
>>>>
>>>>              I guess the spew is caused by the same work_dir.
>>>>              Even with two different work_dirs, how are cpu/mem resources
>>>>              partitioned between the two slave instances?
>>>>              I'm not aware of the current resource parsing logic
>>>>              supporting this (probably it doesn't).
>>>>              But why not use a slave docker image to do the resource
>>>>              partitioning? That's what docker is meant for here.
>>>>
>>>>              On 2016/1/12 19:58, Shiyao Ma wrote:
>>>>
>>>>                  Hi,
>>>>
>>>>                  When trying to start two slaves on a single host, I
>>>>                  encountered the following error:
>>>>
>>>>                  paste: http://sprunge.us/bLKb
>>>>
>>>>                  Apparently, the second slave was *misunderstood* as the
>>>>                  recovery of the first.
>>>>
>>>>                  The slaves are configured identically other than the
>>>>                  ports.
>>>>
>>>>
>>>>                  Regards.
>>>>
>>>>                  --
>>>>
>>>>                  I am a cat. My homepage is https://introo.me.
>>>>
>>>>
>>>>
>>>>
>>>>         --
>>>>         Best Regards,
>>>>         Haosdent Huang
>>>>
>>>>
>>>>
>>
>
