Re: [ceph-users] combined ceph roles

2015-02-11 Thread Christopher Armstrong
Thanks for reporting, Nick - I've seen the same thing and thought I was
just crazy.

Chris



Re: [ceph-users] combined ceph roles

2015-02-11 Thread Stephen Hindle
I saw a similar warning - it turns out it's only an issue if you're using
the kernel driver. If you're using VMs and accessing the cluster through the
library (e.g. qemu/kvm with librbd) you should be OK. As I understand it,
the concern is that a kernel RBD or CephFS mount on an OSD host can deadlock
under memory pressure: flushing dirty pages requires the local OSD to do
work, and the OSD may itself need memory to make progress.
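
For example, qemu can talk to RBD entirely in userspace via librbd, so
nothing is mapped through the kernel (the pool and image names below are
just placeholders):

  # Create a test image and point qemu at it via the rbd: protocol -
  # all I/O goes through librbd in userspace, not the kernel client.
  rbd create libvirt-pool/vm-disk1 --size 10240
  qemu-img info rbd:libvirt-pool/vm-disk1
  qemu-system-x86_64 -m 2048 \
    -drive file=rbd:libvirt-pool/vm-disk1,format=raw,cache=writeback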




Re: [ceph-users] combined ceph roles

2015-02-11 Thread pixelfairy
I believe combining mon+osd, up to however many monitors you want, is
common in small(ish) clusters. I also have a 3-node Ceph cluster at home
running mon+osd, but no client on those nodes - only RBD served to the VM
hosts. No problems even with my abuses (yanking disks out, shutting down
nodes, etc.), and starting and stopping the whole cluster works fine too.
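
If you want to watch the cluster ride out that kind of abuse, the standard
CLI is enough (nothing cluster-specific assumed here):

  ceph -s              # overall health, mon quorum, PG states
  ceph quorum_status   # confirm the surviving mons still form a quorum
  ceph osd tree        # see which OSDs are marked down after a yank
  ceph -w              # follow recovery/backfill live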



Re: [ceph-users] combined ceph roles

2015-02-11 Thread Nick Fisk
Hi David,

I have had a few weird issues when shutting down a node, although I can
replicate it by doing a “stop ceph-all” as well. It seems that OSD failure
detection takes a lot longer when a monitor goes down at the same time;
sometimes I have seen the whole cluster grind to a halt for several minutes
before it works out what's happened.

If I stop either role, wait for it to be detected as failed, and only then
stop the next role, I don't see the problem. So it might be something to
keep in mind when doing maintenance.
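
A rough sketch of that order of operations (upstart-style commands from
this era - on systemd hosts the equivalent is systemctl stop ceph-osd@N;
the OSD IDs and hostname are examples, and noout is optional):

  # Optional: keep the cluster from rebalancing while the node is down.
  ceph osd set noout

  # Stop the OSDs on this host first...
  stop ceph-osd id=0
  stop ceph-osd id=1

  # ...wait until the cluster has marked them down...
  ceph osd tree

  # ...and only then stop the monitor on the same host.
  stop ceph-mon id=node1

  # After maintenance, bring things back and re-enable rebalancing.
  start ceph-mon id=node1
  start ceph-osd id=0
  start ceph-osd id=1
  ceph osd unset noout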

 

Nick

 

From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of David Graham
Sent: 10 February 2015 17:07
To: ceph-us...@ceph.com
Subject: [ceph-users] combined ceph roles

 

Hello, I'm giving thought to a minimal-footprint scenario with full
redundancy. I realize it isn't ideal - and may impact overall performance -
but I am wondering whether the example below would work, is supported, or is
known to cause issues.

Example: 3x hosts, each running:
-- OSDs
-- Mon
-- Client

I thought I read a post a while back about client+OSD on the same host
possibly being an issue, but I am having difficulty finding that reference.

I would appreciate it if anyone has insight into such a setup.

Thanks!
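
For concreteness, a minimal sketch of the ceph.conf such a 3-host layout
might use (the fsid, hostnames, and addresses are made up):

  [global]
  fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
  mon initial members = node1, node2, node3
  mon host = 10.0.0.1, 10.0.0.2, 10.0.0.3
  # One replica per host; with 3 mons, quorum survives any single failure.
  osd pool default size = 3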



Re: [ceph-users] combined ceph roles

2015-02-11 Thread André Gemünd
Hi All,

This would be interesting for us (at least temporarily). Do you think it would
be better to run the mon as a VM on the OSD host, or natively?

Greetings
André


-- 
André Gemünd
Fraunhofer-Institute for Algorithms and Scientific Computing
andre.gemu...@scai.fraunhofer.de
Tel: +49 2241 14-2193
/C=DE/O=Fraunhofer/OU=SCAI/OU=People/CN=Andre Gemuend


Re: [ceph-users] combined ceph roles

2015-02-10 Thread Lindsay Mathieson
A similar setup works well for me - 2 VM hosts and 1 mon-only node, with 6
OSDs (3 per VM host), using RBD and CephFS.

The more memory on your VM hosts, the better.

Lindsay Mathieson
