Hi Robert,

It seems I will have to give up on this goal for now, but I wanted to be sure I 
wasn't missing something obvious.

>If you can survive missing that data you are probably better off running fully 
>from ephemeral storage in the first place.

What, and lose everything written to the ephemeral disk since the VM was created? 
Am I missing something here, or is there an automated way of syncing ephemeral 
disks to a Ceph back end from time to time?
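
The closest thing I could come up with myself is a periodic re-import of the 
ephemeral disk into RBD, along the lines of the rough sketch below (the instance 
path, pool name and interval are just placeholders I made up, and a copy taken 
while the VM is running isn't consistent anyway), but that feels like a hack 
rather than a real sync mechanism:

#!/usr/bin/env python
# Rough sketch only: periodically push a copy of an ephemeral disk into RBD.
# The path, pool and image names below are made-up placeholders, and copying
# a live disk like this is not even crash-consistent.
import subprocess
import time

INSTANCE_DISK = "/var/lib/nova/instances/<uuid>/disk"  # placeholder path
RBD_POOL = "ephemeral-backups"                          # placeholder pool
RBD_IMAGE = "instance-<uuid>-disk"                      # placeholder image
SYNC_INTERVAL = 3600                                    # seconds between syncs

def sync_once():
    # Drop the previous copy (ignore failure if it does not exist yet),
    # then import the current on-disk contents as a fresh RBD image.
    subprocess.call(["rbd", "rm", "%s/%s" % (RBD_POOL, RBD_IMAGE)])
    subprocess.check_call(
        ["rbd", "import", INSTANCE_DISK, "%s/%s" % (RBD_POOL, RBD_IMAGE)])

if __name__ == "__main__":
    while True:
        sync_once()
        time.sleep(SYNC_INTERVAL)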

Thanks

Daniel

-----Original Message-----
From: Van Leeuwen, Robert [mailto:rovanleeu...@ebay.com] 
Sent: 16 March 2016 10:15
To: Daniel Niasoff <dan...@redactus.co.uk>; Jason Dillaman <dilla...@redhat.com>
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Local SSD cache for ceph on each compute node.

>
>My understanding of how a writeback cache should work is that it should only 
>take a few seconds for writes to be streamed onto the network, and that it is 
>focussed on resolving the speed issue of small sync writes. The writes would be 
>bundled into larger writes that are not time sensitive.
>
>So there is potential for a few seconds of data loss, but compared to the 
>current trend of using ephemeral storage to solve this issue, it's a major 
>improvement.

I think it is a bit worse than just a few seconds of data:
As mentioned in the Ceph blueprint, you would need some kind of ordered 
write-back cache that maintains checkpoints internally.

I am not that familiar with the internals of dm-cache, but I do not think it 
guarantees any write order. For example, by default it will bypass the cache for 
sequential IO.
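
To illustrate what I mean by ordered write-back with checkpoints, here is a toy 
Python sketch, purely to show the concept (it has nothing to do with how dm-cache 
or the Ceph blueprint would actually implement it):

from collections import deque

class OrderedWritebackCache(object):
    """Toy model: flush writes strictly in acknowledgement order, and only
    up to a checkpoint, so the backend only ever shows states the writer
    actually produced at some point in time."""

    def __init__(self, backend):
        self.backend = backend      # dict-like: offset -> data
        self.pending = deque()      # writes in the order they were acked
        self.checkpoints = deque()  # positions in the pending queue

    def write(self, offset, data):
        # Acknowledge immediately; remember the ordering for later flushing.
        self.pending.append((offset, data))

    def checkpoint(self):
        # Mark a point that is safe to expose on the backing store.
        self.checkpoints.append(len(self.pending))

    def flush_oldest_checkpoint(self):
        # Flush in order, but only up to the oldest checkpoint. Anything
        # after it is lost on a crash, yet the backend stays consistent.
        if not self.checkpoints:
            return
        upto = self.checkpoints.popleft()
        for _ in range(upto):
            offset, data = self.pending.popleft()
            self.backend[offset] = data
        self.checkpoints = deque(c - upto for c in self.checkpoints)

if __name__ == "__main__":
    disk = {}
    cache = OrderedWritebackCache(disk)
    cache.write(0, "superblock v1")
    cache.write(8, "journal entry")
    cache.checkpoint()               # filesystem-consistent point
    cache.write(0, "superblock v2")  # newer, not yet checkpointed
    cache.flush_oldest_checkpoint()
    print(disk)                      # only the checkpointed state is visible

Without that ordering guarantee the backing device can end up with a mix of old 
and new blocks that never existed together on the cache side.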

So I think it is very likely that "a few seconds of data loss" in this case means 
the filesystem is corrupt and you could lose the whole thing.
At the very least you will need to run fsck on it and hope it can sort out all 
of the errors with minimal data loss.


So, to me, it seems contradictory to use persistent storage and then just hope 
your volumes survive a power outage.

If you can survive missing that data you are probably better off running fully 
from ephemeral storage in the first place.

Cheers,
Robert van Leeuwen

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com