Hi Arne and James,

Ah, I misunderstood James' suggestion. Using bcache with SSDs can indeed be another 
viable alternative to SSD journal partitions.
I think I will ultimately need to test the options myself, since very few people have 
experience with cache tiering or bcache.
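Just to make the cache tiering variant concrete, what I'd be testing is roughly the 
standard writeback tier setup -- pool names and PG counts below are only placeholders, 
and the SSD pool would need its own CRUSH rule first:

  # pool backed by the SSD OSDs (pg count is just an example)
  ceph osd pool create ssd-cache 128 128
  # put it in front of the existing data pool as a writeback cache tier
  ceph osd tier add rbd ssd-cache
  ceph osd tier cache-mode ssd-cache writeback
  ceph osd tier set-overlay rbd ssd-cache
  # hit set tracking so the tiering agent can decide what to flush/evict
  ceph osd pool set ssd-cache hit_set_type bloom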

Thanks,
Benjamin

From: Arne Wiebalck [mailto:arne.wieba...@cern.ch]
Sent: Tuesday, July 08, 2014 11:27 AM
To: Somhegyi Benjamin
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Using large SSD cache tier instead of SSD journals?

Hi Benjamin,

Unless I misunderstood, I think the suggestion was to use bcache devices on the OSDs 
(not on the clients), so what the cluster is used for in the end doesn't really matter.

Setting up a bcache device is pretty similar to running a mkfs, and once set up, 
bcache devices come up and can be mounted like any other block device.
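Roughly like this (device names here are just examples):

  make-bcache -C /dev/sdc              # SSD as the caching device
  make-bcache -B /dev/sdb              # HDD as the backing device
  echo /dev/sdc > /sys/fs/bcache/register
  echo /dev/sdb > /sys/fs/bcache/register
  # attach the backing device to the cache set (UUID from bcache-super-show /dev/sdc)
  echo <cset-uuid> > /sys/block/bcache0/bcache/attach
  mkfs.xfs /dev/bcache0                # then use /dev/bcache0 for the OSD as usual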

Cheers,
 Arne

--
Arne Wiebalck
CERN IT

On 08 Jul 2014, at 11:01, Somhegyi Benjamin 
<somhegyi.benja...@wigner.mta.hu> wrote:


Hi James,

Yes, I've looked at bcache, but as far as I can tell you need to manually configure 
and register the backing devices and attach them to the cache device, which is not 
really suited to a dynamic environment (like RBD devices for cloud VMs).
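The manual steps I mean are the per-device registration and attach via sysfs, 
something along these lines for every new backing device (device name, bcache 
number and cache set UUID are placeholders):

  echo /dev/sdX > /sys/fs/bcache/register
  echo <cset-uuid> > /sys/block/bcacheN/bcache/attach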

Benjamin



-----Original Message-----
From: James Harper [mailto:ja...@ejbdigital.com.au]
Sent: Tuesday, July 08, 2014 10:17 AM
To: Somhegyi Benjamin; 
ceph-users@lists.ceph.com
Subject: RE: Using large SSD cache tier instead of SSD journals?

Have you considered bcache? It's been in the kernel since 3.10, I think.

It would be interesting to see comparisons between no SSD, journal on SSD, and 
bcache with SSD (with the journal on the same filesystem as the OSD).
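Something like rados bench against the same pool on each configuration would 
probably be enough for a first comparison (pool name is a placeholder):

  rados bench -p testpool 60 write --no-cleanup
  rados bench -p testpool 60 seq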

James

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
