Ceph uses CRUSH (http://ceph.com/docs/master/rados/operations/crush-map/) to 
determine object placement.  The default generated CRUSH maps are sane: they 
place the replicas of a placement group in separate failure domains (separate 
hosts, by default), so losing an entire OSD node does not take out every copy 
of the data.  You do not need to worry about that simple failure case, but you 
should consider the network and disk I/O cost of re-replicating large amounts 
of data when a whole node's worth of OSDs goes down at once.
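For reference, the default replicated rule in a stock cluster typically looks 
something like the sketch below (as decompiled with "crushtool -d"; the rule 
name and numbers can vary by release and by how the cluster was deployed):

    rule replicated_ruleset {
            ruleset 0
            type replicated
            min_size 1
            max_size 10
            # start from the root of the hierarchy
            step take default
            # pick N distinct host buckets, one OSD (leaf) under each
            step chooseleaf firstn 0 type host
            step emit
    }

The key line is "step chooseleaf firstn 0 type host": each replica lands under 
a different host bucket, so a server holding 8-10 OSDs can fail without losing 
every copy of a placement group.  You can check what your own cluster is using 
with "ceph osd crush rule dump" and "ceph osd tree".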

Sean
________________________________________
From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of LaBarre, 
James (CTR) A6IT [james.laba...@cigna.com]
Sent: Thursday, August 21, 2014 9:17 AM
To: ceph-us...@ceph.com
Subject: [ceph-users] Question on OSD node failure recovery

I understand the concept of Ceph recovering from the failure of an OSD 
(presumably with a single OSD on a single disk), but I'm wondering what the 
scenario is if an OSD server node containing multiple disks should fail.  
Presuming you have a server containing 8-10 disks, the replicas of a placement 
group could end up on the same system.  The diagrams I've seen show replicas 
going to separate nodes, but is that in fact how it is handled?
