On 04/23/2014 09:35 PM, Craig Lewis wrote:


On 4/23/14 12:33, Dyweni - Ceph-Users wrote:
Hi,

I'd like to know what happens to a cluster with one monitor while that
one monitor process is being restarted.

For example, if I have an RBD image mounted and in use (actively
reading/writing) when I restart that monitor, will all those reading
and writing operations block until the monitor has finished restarting
and is operational again?
I haven't tested RBD, but I did test RGW.  RGW continued working fine
for the ~10 minutes I had the monitors down.  I didn't try to restart
anything else, though.  I believe that if I had attempted to restart
radosgw while the monitors were down, it would have blocked until the
monitors came back up.

With absolutely no idea what I'm talking about, I would guess that your
already-mounted RBD would continue to work, and any new mounts would block.

That's basically it, for about 5 or 10 minutes. Then a timer fires and the clients retry contacting the monitor; at that point its absence will be noticed and the clients will block.
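To make the "retry, then block" behaviour concrete, here is a minimal sketch (not librados code; `hunt_for_mon`, `mon_is_up`, and the 3-second interval are illustrative assumptions standing in for the client's real monitor-hunting logic and its configurable timeouts):

```python
import time

def hunt_for_mon(mon_is_up, retry_interval=3.0, sleep=time.sleep):
    """Block until a monitor responds, retrying at a fixed interval.

    Illustrative model only: a real Ceph client "hunts" for monitors
    with configurable timeouts; this just captures the behaviour
    described above, where the client keeps retrying (and effectively
    blocks) until the monitor is reachable again.
    """
    attempts = 0
    while not mon_is_up():        # probe the monitor
        attempts += 1             # count failed probes
        sleep(retry_interval)     # back off before retrying
    return attempts               # number of retries before success
```

With a single monitor, every client ends up in a loop like this until the restarted mon is back; with multiple monitors, the client would simply try the next one.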

I assume that very bad things would happen if I lost a node or disk
while the monitors were down.  I have no plans to test this.

Not necessarily. OSDs exchange heartbeats to track which other OSDs are alive. If a server goes down or a disk fails, the other OSDs will notice and report the failed OSD to the mons. While I'm not sure whether the remaining OSDs will block while trying to reach the down monitor, my guess is that it shouldn't matter: the client, noticing that its data has not gone through to the OSD, or come back from the OSD, will try to get a new map from the mons and block. However, this is purely speculation on my part.
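The heartbeat-based failure detection described above can be sketched as follows (a toy model, not Ceph's actual heartbeat machinery; the 20-second grace period and the function names are assumptions for illustration, loosely mirroring the idea of a heartbeat grace interval):

```python
def find_failed_osds(last_heartbeat, now, grace=20.0):
    """Return the IDs of OSDs whose last heartbeat is older than `grace`.

    Toy model of the peer-to-peer heartbeat check described above: each
    OSD remembers when it last heard from each peer, and peers that have
    been silent longer than the grace period get reported to the mons.
    """
    return {osd for osd, ts in last_heartbeat.items() if now - ts > grace}
```

In the real cluster, each OSD that detects a silent peer sends a failure report to the monitors, which then mark the OSD down in the next map.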

The monitor processes aren't very heavy though.  Is there any reason you
can't run more than one?

Or even at least three? :)
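For reference, a three-monitor setup is just a matter of listing the mons in ceph.conf; the hostnames and addresses below are placeholders, not from this thread:

```ini
[global]
# Three monitors give a quorum that survives the loss of any one mon.
mon_initial_members = mon-a, mon-b, mon-c
mon_host = 192.0.2.1, 192.0.2.2, 192.0.2.3
```

With an odd number of monitors, restarting any single mon leaves a majority in quorum, so clients never hit the blocking behaviour discussed above.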

  -Joao


--
Joao Eduardo Luis
Software Engineer | http://inktank.com | http://ceph.com
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
