On Wed, May 23, 2012 at 12:47 PM, Jerker Nyberg <jer...@update.uu.se> wrote:
> On Tue, 22 May 2012, Gregory Farnum wrote:
>
>> Direct users of the RADOS object store (i.e., librados) can do all kinds
>> of things with the integrity guarantee options. But I don't believe there's
>> currently a way to make the filesystem do so -- among other things, you're
>> running through the page cache and other writeback caches anyway, so it
>> generally wouldn't be useful except when running an fsync or similar. And at
>> that point you probably really want to not be lying to the application
>> that's asking for it.
>
>
> I am comparing with in-memory databases. If replication and failover are
> used, couldn't keeping data in memory be good enough in some cases? And faster.
>
>
>> do you have a use case on Ceph?
>
>
> Currently of interest:
>
>  * Scratch file system for HPC. (kernel client)
>  * Scratch file system for research groups. (SMB, NFS, SSH)
>  * Backend for simple disk backup. (SSH/rsync, AFP, BackupPC)
>  * Metropolitan cluster.
>  * VDI backend. KVM with RBD.
Hmm. Sounds to me like scratch filesystems would get a lot out of not
having to hit disk on the commit, but not much out of having separate
caching locations versus just letting the OSD page cache handle it. :)
For the other use cases, I don't really see collaborative caching helping much either.

So basically it sounds like you want to be able to toggle off Ceph's
data safety requirements. That would have to be done in the clients;
it wouldn't even be hard in ceph-fuse (although I'm not sure about the
kernel client). It's probably a pretty easy way to jump into the code
base.... :)
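
For reference, here's roughly what that distinction already looks like to a
direct librados user -- a minimal, untested sketch (error checking omitted;
the config path, "data" pool, and object name are just placeholders). The
completion can be waited on at two levels: "complete" (ack) fires once the
write is in memory on all replicas, "safe" once it's committed to disk on all
replicas. A filesystem-level toggle would essentially mean returning at the
first of those instead of the second.

    /* Minimal librados sketch (untested): write an object asynchronously
     * and choose which guarantee to wait for. */
    #include <rados/librados.h>
    #include <string.h>

    int main(void)
    {
        rados_t cluster;
        rados_ioctx_t io;
        rados_completion_t comp;
        const char *buf = "hello";

        rados_create(&cluster, NULL);
        rados_conf_read_file(cluster, "/etc/ceph/ceph.conf");
        rados_connect(cluster);
        rados_ioctx_create(cluster, "data", &io);

        rados_aio_create_completion(NULL, NULL, NULL, &comp);
        rados_aio_write(io, "myobject", comp, buf, strlen(buf), 0);

        /* "ack": the write is in memory on all replicas -- the weaker,
         * faster guarantee a scratch filesystem might settle for. */
        rados_aio_wait_for_complete(comp);

        /* "commit": the write is on stable storage on all replicas --
         * what the client currently insists on for fsync/commit. */
        rados_aio_wait_for_safe(comp);

        rados_aio_release(comp);
        rados_ioctx_destroy(io);
        rados_shutdown(cluster);
        return 0;
    }
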
Anyway, make a bug for it in the tracker (I don't think one exists
yet, though I could be wrong) and someday when we start work on the
filesystem again we should be able to get to it. :)
-Greg