> -----Original Message-----
> From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
> ow...@vger.kernel.org] On Behalf Of Sage Weil
> Sent: Monday, October 19, 2015 9:49 PM
> 
> The current design is based on two simple ideas:
> 
>  1) a key/value interface is better way to manage all of our internal metadata
> (object metadata, attrs, layout, collection membership, write-ahead logging,
> overlay data, etc.)
> 
>  2) a file system is well suited for storage object data (as files).
> 
> So far 1 is working out well, but I'm questioning the wisdom of #2.  A few
> things:
> 
> [..]
> 
> But what's the alternative?  My thought is to just bite the bullet and consume
> a raw block device directly.  Write an allocator, hopefully keep it pretty
> simple, and manage it in kv store along with all of our other metadata.

This is pretty much reinventing the file system, but...

I actually did something similar for a personal project (an e-mail client), 
moving from a maildir-like structure (each message was one file) to something 
resembling mbox (one large file per mail folder, containing pre-decoded 
structures for fast and easy access). And this worked out really well, 
especially with searches and bulk processing (filtering by body contents, and 
so on). I don't remember the exact figures, but the performance benefit was at 
least an order of magnitude. If huge amounts of small-to-medium (0-128k) 
objects are the target, this is the way to go.

The most serious issue was fragmentation. Since I put my box files on top of 
an actual FS (here: NTFS), low-level fragmentation was not a problem (each 
message was read and written in one fread/fwrite anyway). High-level 
fragmentation was an issue - each time a message was moved away, it still 
occupied space. To combat this, I wrote a space reclaimer that moved messages 
within the box file (consolidated them) and maintained a bitmap of free 4k 
chunks, so I could re-use freed space without spending too much time iterating 
through messages and without calling the reclaimer. The reclaimer was also 
smart enough not to move messages one-by-one: instead, it loaded up to n 
messages in at most n reads (in the common case fewer than that), wrote them 
out in one call, and kept working only until some space was actually 
reclaimed, instead of doing a full garbage collection. The machinery was also 
aware of the fact that messages were (mostly) appended to the end of the box, 
so it moved the end-of-box pointer back once messages at the end were deleted.
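To make the bitmap idea concrete, here is a minimal sketch of what I mean (names and layout are illustrative, not from any real codebase): track box-file space at 4k granularity, first-fit search for a free run, and derive the end-of-box pointer from the highest used chunk.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical sketch: one bit per 4 KiB chunk of the box file.
// Freed holes can be reused without scanning every message.
class ChunkBitmap {
public:
    static constexpr size_t kChunk = 4096;

    explicit ChunkBitmap(size_t chunks) : free_(chunks, true) {}

    // Mark the chunks backing [offset, offset+len) as used or free.
    void set_used(uint64_t offset, uint64_t len, bool used) {
        for (uint64_t c = offset / kChunk; c * kChunk < offset + len; ++c)
            free_[c] = !used;
    }

    // First-fit search for n contiguous free chunks; returns a byte
    // offset, or -1 if no run is large enough.
    int64_t find_free(size_t n) const {
        size_t run = 0;
        for (size_t c = 0; c < free_.size(); ++c) {
            run = free_[c] ? run + 1 : 0;
            if (run == n)
                return static_cast<int64_t>(c - n + 1) * kChunk;
        }
        return -1;
    }

    // "Move back the end-of-box pointer": index one past the highest
    // used chunk, so trailing deletions shrink the file logically.
    size_t end_chunk() const {
        for (size_t c = free_.size(); c > 0; --c)
            if (!free_[c - 1]) return c;
        return 0;
    }

private:
    std::vector<bool> free_;
};
```

The reclaimer then only has to kick in when find_free comes back empty, which is exactly the "don't iterate through messages unless you must" property described above.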

The other issue was reliability. Obviously, I had the option of a secondary 
temp file, but still, everything above is doable without it.

Benefits included reduced requirements for metadata storage. Instead of 
generating a unique ID (filename) for each message (apparently, the message-id 
header is not reliable in that regard), I just stored an offset and size (8+4 
bytes per message), which for 300 thousand messages came to just 3.5MB and 
could be kept in RAM. I/O performance also improved due to a less random 
access pattern (messages were physically close to each other instead of being 
scattered all over the drive).
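The arithmetic behind that figure is easy to check; a sketch of such a packed index record (hypothetical names, and note that without packing the struct would be padded to 16 bytes on most ABIs):

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical per-message index record: offset + size only,
// 8 + 4 = 12 bytes. Packed so it really occupies 12 bytes.
#pragma pack(push, 1)
struct MsgRef {
    uint64_t offset;  // byte offset of the message within the box file
    uint32_t size;    // message length in bytes
};
#pragma pack(pop)

// 300,000 messages * 12 bytes = 3,600,000 bytes - roughly the
// 3.5MB mentioned above, small enough to hold the whole index in RAM.
constexpr uint64_t index_bytes(uint64_t messages) {
    return messages * sizeof(MsgRef);
}
```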

For Ceph, the benefits could be even greater. I can imagine faster deep scrubs 
that are way more efficient on spinning drives; efficient object storage (no 
per-object fragmentation and less disk-intensive object readahead, maybe with 
better support from hardware); possibly more reliability (when we fsync, we 
actually fsync - we don't get cheated by the underlying FS); and we could 
optimize for particular devices (for example, most SSDs suck like vacuum on 
I/Os below 4k, so we could enforce I/Os of at least 4k).
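Enforcing that last point is cheap once you own the block device. A minimal sketch (helper names are illustrative): expand any byte range to the enclosing 4k-aligned range before issuing it to the device.

```cpp
#include <cassert>
#include <cstdint>

// Device I/O granularity we want to enforce (assumed here: 4 KiB).
constexpr uint64_t kBlock = 4096;

constexpr uint64_t align_down(uint64_t x) { return x & ~(kBlock - 1); }
constexpr uint64_t align_up(uint64_t x)   { return (x + kBlock - 1) & ~(kBlock - 1); }

struct Range { uint64_t off, len; };

// Given a logical range [off, off+len), return the aligned range the
// block device actually sees - never smaller than one 4 KiB block.
constexpr Range aligned_io(uint64_t off, uint64_t len) {
    uint64_t start = align_down(off);
    uint64_t end   = align_up(off + len);
    return {start, end - start};
}
```

This is also what O_DIRECT-style access effectively requires anyway, so the constraint comes for free if the store bypasses the page cache.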

Just my $0.02.

With best regards / Pozdrawiam
Piotr Dałek

