We are trying to assess whether we would see data loss if an SSD that hosts 
journals for a few OSDs crashes. In our configuration, each SSD is partitioned 
into 5 chunks, and each chunk is mapped as the journal drive for one OSD.
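
For concreteness, the layout looks roughly like the sketch below (the device 
name /dev/sdb and the partition numbering are hypothetical, not our actual 
values):

    # ceph.conf excerpt: five OSDs sharing one SSD (/dev/sdb) for journals.
    # /dev/sdb1 .. /dev/sdb5 are illustrative journal partitions.
    [osd.0]
    osd journal = /dev/sdb1
    [osd.1]
    osd journal = /dev/sdb2
    [osd.2]
    osd journal = /dev/sdb3
    [osd.3]
    osd journal = /dev/sdb4
    [osd.4]
    osd journal = /dev/sdb5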
What I understand from the Ceph documentation:

"Consistency: Ceph OSD Daemons require a filesystem interface that guarantees 
atomic compound operations. Ceph OSD Daemons write a description of the 
operation to the journal and apply the operation to the filesystem. This 
enables atomic updates to an object (for example, placement group metadata). 
Every few seconds - between filestore max sync interval and filestore min sync 
interval - the Ceph OSD Daemon stops writes and synchronizes the journal with 
the filesystem, allowing Ceph OSD Daemons to trim operations from the journal 
and reuse the space. On failure, Ceph OSD Daemons replay the journal starting 
after the last synchronization operation."
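
The two intervals the documentation refers to are tunables in ceph.conf; the 
snippet below shows them with what I believe are the filestore defaults 
(0.01 s and 5 s - please treat the exact values as an assumption and verify 
against your release):

    # ceph.conf excerpt: journal-to-filestore sync cadence.
    # Values shown are believed to be the filestore defaults.
    [osd]
    filestore min sync interval = 0.01
    filestore max sync interval = 5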

So my question is: what happens if an SSD fails? Am I going to lose all the 
data that has not yet been written/synchronized from the journals to the OSDs? 
In my case, would I lose data for all 5 OSDs at once, which could be bad? This 
is a concern for us. What are the options to prevent any data loss at all?

Is it better to have the journals on the same hard drives, i.e., one journal 
per OSD, hosted on the same drive as the OSD? Performance would of course not 
be as good as with an SSD journal, but in that case I expect we would not lose 
data, since the data is replicated to secondary OSDs (we are using triple 
replication).

Any thoughts? What other solutions have people adopted for data reliability 
and consistency to address the case I describe?
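
For reference, this is how we confirm the replication factor on a pool 
(the pool name is a placeholder):

    # Query the pool's replication factor; we expect it to report "size: 3".
    ceph osd pool get <poolname> size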


