On 08/02/2012 12:02 PM, Mark Nelson wrote:
> On 8/2/12 11:13 AM, Tommi Virtanen wrote:
>> Sounds like bcache in writeback mode. It assumes all underlying block
>> devices are RAIDed; otherwise losing one device means losing data. For
>> example: RAID1(SSD+SSD) & RAID5(8*HDD).
>>
>> http://www.lessfs.com/wordpress/?p=776
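For reference, a bcache setup along those lines might look roughly like the
sketch below. The device names (/dev/md/ssd for the RAID1 SSD pair, /dev/md/hdd
for the RAID5 array) are hypothetical, and the cache-set UUID has to be read
back from make-bcache's output:

```shell
# Format the RAID5 HDD array as the backing device and the
# RAID1 SSD pair as the cache device (device names are examples).
make-bcache -B /dev/md/hdd
make-bcache -C /dev/md/ssd

# Register both with the kernel (udev usually does this automatically).
echo /dev/md/hdd > /sys/fs/bcache/register
echo /dev/md/ssd > /sys/fs/bcache/register

# Attach the backing device to the cache set, using the cache-set
# UUID printed by make-bcache, then enable writeback caching.
echo <cset-uuid> > /sys/block/bcache0/bcache/attach
echo writeback > /sys/block/bcache0/bcache/cache_mode
```

With writeback enabled, dirty data sits only on the SSD until it is flushed
to the backing array, which is why the RAID1 on the cache side matters.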
>> -- 
>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
>> the body of a message to majord...@vger.kernel.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>
> 
> Neat!  I'll try to play with it once the test hardware all makes it in.
> 
> Alex is also trying to bug the XFS guys (and Sage bugged the BTRFS guys)
> about ways to put metadata on SSD while keeping data on spinning disk.

I have the XFS patch.  It's based on pretty old kernel code.  I began
porting it forward yesterday, but it was taking too long so I set
it aside.

I'll pick it up again soon to see if I can get through it.

                                        -Alex

> It sounds like there is a hack for XFS that would let us keep inodes in
> the lower portion of a volume up to some configurable boundary and then
> we could use LVM to assign that portion of the volume to an SSD.  The
> BTRFS guys have a SOC project in the works to separate out metadata onto
> another disk.
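The LVM half of that trick can be sketched without the XFS patch: LVM appends
logical extents in order, so creating the LV on the SSD PV first and then
extending it onto the HDD PV puts the low part of the volume (where the patched
XFS would keep inodes) on flash. Device names and the extent count are made up
for illustration:

```shell
# Hypothetical devices: /dev/ssd1 is the SSD, /dev/sda1 the spinning disk.
pvcreate /dev/ssd1 /dev/sda1
vgcreate vg0 /dev/ssd1 /dev/sda1

# Allocate the first 2500 extents (the low end of the LV's address
# space) on the SSD only...
lvcreate -n osd0 -l 2500 vg0 /dev/ssd1

# ...then extend the LV with the remaining space from the HDD, so all
# higher addresses land on the spinning disk.
lvextend -l +100%FREE vg0/osd0 /dev/sda1
```

The configurable inode boundary in the XFS hack would then just need to match
the size of the SSD-backed region.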
> 
> I think these kinds of things could really help our small request
> performance.
> 
> Mark
> 
