I tried putting Flashcache on my spindle OSDs using an Intel SSD and it
works great.  This gives me SSD caching for both reads and writes, instead
of just write acceleration from the journal.  It should also let me keep the
OSD journal on the same drive as the OSD data and still get the benefits of
SSD caching for writes.
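
For reference, here is roughly what the setup looks like; the device names
and OSD mount point below are just examples, so adjust them for your own
layout:

     # create a writeback flashcache device in front of the spinning OSD disk
     # (here /dev/sdb is the SSD partition and /dev/sdc is the spinner)
     flashcache_create -p back osd0_cache /dev/sdb /dev/sdc

     # build the OSD filesystem on the cached device rather than the raw disk
     mkfs.xfs /dev/mapper/osd0_cache
     mount /dev/mapper/osd0_cache /var/lib/ceph/osd/ceph-0

Since the journal file sits behind the same cached device, its writes should
get the benefit of the SSD as well.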


On Mon, Oct 7, 2013 at 11:43 AM, Jason Villalta <ja...@rubixnet.com> wrote:

> I found this without much effort.
>
> http://www.sebastien-han.fr/blog/2012/11/15/make-your-rbd-fly-with-flashcache/
>
>
> On Mon, Oct 7, 2013 at 11:39 AM, Jason Villalta <ja...@rubixnet.com> wrote:
>
>> I also would be interested in how bcache or flashcache would integrate.
>>
>>
>> On Mon, Oct 7, 2013 at 11:34 AM, Martin Catudal <mcatu...@metanor.ca> wrote:
>>
>>> Thanks Mike,
>>>      Kyle Bader also suggested that I use my large SSD (900 GB) as a cache
>>> drive with "bcache" or "flashcache".
>>> Since I already plan to use an SSD for my journal, I would certainly also
>>> use an SSD as a cache drive in addition.
>>>
>>> I will have to read the documentation about "bcache" and its integration
>>> with Ceph.
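>>>
>>> From what I have read so far, the basic setup looks roughly like this (the
>>> device names are only examples, and I still have to verify the details on
>>> my own hardware):
>>>
>>>      # format the spinner as the backing device and the SSD as the cache
>>>      # device in one invocation, so they attach to each other automatically
>>>      make-bcache -B /dev/sdc -C /dev/sdb
>>>
>>>      # if /dev/bcache0 does not appear via udev, register the devices by hand
>>>      echo /dev/sdc > /sys/fs/bcache/register
>>>      echo /dev/sdb > /sys/fs/bcache/register
>>>
>>>      # bcache defaults to writethrough; switch to writeback caching
>>>      echo writeback > /sys/block/bcache0/bcache/cache_mode
>>>
>>>      # the OSD filesystem then goes on the bcache device, not the raw disk
>>>      mkfs.xfs /dev/bcache0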
>>>
>>> Martin
>>>
>>> Martin Catudal
>>> Responsable TIC
>>> Ressources Metanor Inc
>>> Ligne directe: (819) 218-2708
>>>
>>> On 2013-10-07 11:25, Mike Lowe wrote:
>>> > Based on my experience, I think you are grossly underestimating the
>>> > expense and frequency of the flushes issued from your VMs.  This will be
>>> > especially bad if you aren't using the async flush from qemu >= 1.4.2,
>>> > since the VM is suspended while qemu waits for the flush to finish.  Until
>>> > the caching pool work is completed (I believe that is currently in
>>> > development), I think your best course of action is either to use the SSDs
>>> > as large caches with bcache or to use them as journal devices.  I'm sure
>>> > there are other, more informed opinions out there on the best use of SSDs
>>> > in a Ceph cluster, and hopefully they will chime in.
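>>> >
>>> > For what it's worth, the knobs involved look roughly like this; the pool
>>> > and image names are just examples, so check them against your own qemu
>>> > and Ceph versions:
>>> >
>>> >      # client-side ceph.conf: let librbd cache writes and absorb some flushes
>>> >      [client]
>>> >          rbd cache = true
>>> >          rbd cache writethrough until flush = true
>>> >
>>> >      # and expose the image to the guest with a matching cache mode
>>> >      qemu-system-x86_64 ... \
>>> >          -drive file=rbd:rbd/vm-disk,if=virtio,format=raw,cache=writeback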
>>> >
>>> > On Oct 6, 2013, at 9:23 PM, Martin Catudal <mcatu...@metanor.ca> wrote:
>>> >
>>> >> Hi Guys,
>>> >>      I read all the Ceph documentation more than twice. I'm now very
>>> >> comfortable with every aspect of Ceph except for the strategy for using
>>> >> my SSDs and HDDs.
>>> >>
>>> >> Here is my thinking.
>>> >>
>>> >> I see two approaches for using my fast SSDs (900 GB) as primary storage
>>> >> and my large but slower HDDs (4 TB) for replicas.
>>> >>
>>> >> FIRST APPROACH
>>> >> 1. Use the SSDs as primary storage with write caching enabled, and let
>>> >> the replicas go to the 7200 RPM HDDs.
>>> >>       With write caching enabled, my VDI user VMs would gain performance,
>>> >> since the Ceph client will not have to wait for the replica write
>>> >> acknowledgements from the slower HDDs.
>>> >>
>>> >> SECOND APPROACH
>>> >> 2. Use CRUSH hierarchies: take the primary from an "ssd" tree and send
>>> >> the replicas to a second tree named "platter" for HDD replication.
>>> >>      As explained in the Ceph documentation:
>>> >>      rule ssd-primary {
>>> >>                ruleset 4
>>> >>                type replicated
>>> >>                min_size 5
>>> >>                max_size 10
>>> >>                step take ssd
>>> >>                step chooseleaf firstn 1 type host
>>> >>                step emit
>>> >>                step take platter
>>> >>                step chooseleaf firstn -1 type host
>>> >>                step emit
>>> >>        }
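>>> >>
>>> >> A pool is then pointed at that rule by its ruleset number, along these
>>> >> lines (the pool name and PG count are only examples):
>>> >>
>>> >>        ceph osd pool create vdi 128 128
>>> >>        ceph osd pool set vdi crush_ruleset 4
>>> >>
>>> >> Note that the rule's min_size and max_size (5 and 10 here) bound the
>>> >> replica counts for which CRUSH will use it.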
>>> >>
>>> >> At this point, I cannot figure out which approach would have the most
>>> >> advantages.
>>> >>
>>> >> Your point of view would definitely help me.
>>> >>
>>> >> Sincerely,
>>> >> Martin
>>> >>
>>> >> --
>>> >> Martin Catudal
>>> >> Responsable TIC
>>> >> Ressources Metanor Inc
>>> >> Ligne directe: (819) 218-2708
>>>
>>
>>
>>
>>
>
>
>
>



-- 
*Jason Villalta*
Co-founder
800.799.4407x1230 | www.RubixTechnology.com


_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
