i believe combining mon+osd, up to whatever magic number of monitors you
want, is common in small(ish) clusters. i also have a 3-node ceph cluster
at home and am doing mon+osd, but not client; only rbd served to the vm hosts.
no problems even with my abuses (yanking disks out, shutting down nodes, etc.),
and starting and stopping the whole cluster works fine too.
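
fwiw, the mon side of a 3-node mon+osd ceph.conf is nothing special, roughly
like the sketch below (the names/addresses are made up, fsid is a placeholder):

  [global]
  fsid = <cluster uuid>
  mon initial members = node1, node2, node3
  mon host = 10.0.0.1, 10.0.0.2, 10.0.0.3
  osd pool default size = 3
  osd pool default min size = 2

the osds then just live on the same three hosts as usual.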

On Wed, Feb 11, 2015 at 9:07 AM, Christopher Armstrong <ch...@opdemand.com>
wrote:

> Thanks for reporting, Nick - I've seen the same thing and thought I was
> just crazy.
>
> Chris
>
> On Wed, Feb 11, 2015 at 6:48 AM, Nick Fisk <n...@fisk.me.uk> wrote:
>
>> Hi David,
>>
>> I have had a few weird issues when shutting down a node, although I can
>> replicate them by doing a “stop ceph-all” as well. It seems that OSD failure
>> detection takes a lot longer when a monitor goes down at the same time;
>> sometimes I have seen the whole cluster grind to a halt for several minutes
>> before it works out what's happened.
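>>
>> The failure-detection timeouts involved (osd heartbeat grace, mon osd min
>> down reporters, mon osd down out interval) can be checked on a live daemon
>> through its admin socket on that node, e.g. (the mon id here is just a
>> placeholder, and the defaults vary between releases):
>>
>>   ceph daemon mon.node1 config show | grep -E 'heartbeat_grace|down_out_interval|min_down'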
>>
>> If I stop either role and wait for it to be detected as failed before
>> stopping the next one, I don’t see the problem. So it might be something to
>> keep in mind when doing maintenance.
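>>
>> In other words, something along these lines per node rather than an
>> all-at-once stop (upstart syntax; the ids/hostnames are placeholders,
>> adjust for your init system):
>>
>>   stop ceph-osd id=0        # stop that node's OSDs one at a time
>>   ceph osd tree             # wait until they are reported down
>>   stop ceph-mon id=node1    # only then take the monitor down
>>
>> Setting "ceph osd set noout" beforehand (and unsetting it afterwards) also
>> avoids data being rebalanced while the node is out for maintenance.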
>>
>> Nick
>>
>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of David Graham
>> Sent: 10 February 2015 17:07
>> To: ceph-us...@ceph.com
>> Subject: [ceph-users] combined ceph roles
>>
>> Hello, I'm giving thought to a minimal-footprint scenario with full
>> redundancy. I realize it isn't ideal -- and may impact overall performance
>> -- but I'm wondering whether the example below would work, be supported, or
>> be known to cause issues?
>>
>> Example, 3x hosts each running:
>> -- OSDs
>> -- Mon
>> -- Client
>>
>> I thought I read a post a while back about Client+OSD on the same host
>> possibly being an issue, but I am having difficulty finding that
>> reference.
>>
>> I would appreciate it if anyone has insight into such a setup,
>>
>> thanks!
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
