Am 18.08.2015 um 15:43 schrieb Campbell, Bill:
> Hey Stefan,
> Are you using your Ceph cluster for virtualization storage?
Yes

>  Is dm-writeboost configured on the OSD nodes themselves?
Yes

Stefan

> 
> ------------------------------------------------------------------------
> *From: *"Stefan Priebe - Profihost AG" <s.pri...@profihost.ag>
> *To: *"Mark Nelson" <mnel...@redhat.com>, ceph-users@lists.ceph.com
> *Sent: *Tuesday, August 18, 2015 7:36:10 AM
> *Subject: *Re: [ceph-users] any recommendation of using EnhanceIO?
> 
> We've been using an extra caching layer for Ceph since the beginning on
> our older Ceph deployments. All new deployments go with full SSDs.
> 
> I've tested so far:
> - EnhanceIO
> - Flashcache
> - Bcache
> - dm-cache
> - dm-writeboost
> 
> The best-working solution was, and still is, bcache - except for its
> buggy code. The current code in the 4.2-rc7 vanilla kernel still
> contains bugs; e.g., discards can result in a corrupted filesystem
> after a reboot. But it's still the fastest option for Ceph.
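> 
> For reference, a minimal sketch of such a setup (device names here are
> illustrative, not our actual layout); because of the discard bug we
> simply never mount the resulting filesystem with -o discard:
> 
>   make-bcache -C /dev/nvme0n1p1  # format the SSD as a cache device
>   make-bcache -B /dev/sdb        # format the HDD as the backing device
>   # attach the cache set (UUID printed by make-bcache -C) to the backing device
>   echo <cset-uuid> > /sys/block/bcache0/bcache/attach
>   echo writeback > /sys/block/bcache0/bcache/cache_mode
>   mkfs.xfs /dev/bcache0          # the OSD then lives on /dev/bcache0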
> 
> The second-best solution, which we already use in production, is
> dm-writeboost (https://github.com/akiradeveloper/dm-writeboost).
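> 
> For anyone who wants to try it, a rough sketch along the lines of the
> project README (device names are again illustrative):
> 
>   sz=$(blockdev --getsz /dev/sdb)  # backing device size in sectors
>   dmsetup create osd-cached \
>     --table "0 $sz writeboost /dev/sdb /dev/nvme0n1p1"
>   # the cached device then appears as /dev/mapper/osd-cached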
> 
> Everything else is too slow.
> 
> Stefan
> Am 18.08.2015 um 13:33 schrieb Mark Nelson:
>> Hi Jan,
>>
>> Out of curiosity, did you ever try dm-cache?  I've been meaning to give
>> it a spin but haven't had the spare cycles.
>>
>> Mark
>>
>> On 08/18/2015 04:00 AM, Jan Schermer wrote:
>>> I already evaluated EnhanceIO in combination with CentOS 6 (and
>>> backported 3.10 and 4.0 kernel-lt, if I remember correctly).
>>> It worked fine during benchmarks and stress tests, but once we ran DB2
>>> on it, it panicked within minutes and took all the data with it (almost
>>> literally - files that weren't touched, like OS binaries, were
>>> corrupted and the filesystem was unsalvageable).
>>> Even if you disregard this warning, the performance gains weren't that
>>> great either, at least in a VM. It had problems when flushing to disk
>>> after reaching the dirty watermark, and the block size has some
>>> not-well-documented implications (I'm not sure now, but I think it only
>>> cached IO _larger_ than the block size, so if your database keeps
>>> incrementing an XX-byte counter, those writes will go straight to disk).
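>>>
>>> (If you want to verify that behaviour yourself, a quick hypothetical
>>> fio test - paths are illustrative - comparing writes below and above
>>> the cache block size:
>>>
>>>   fio --name=small --filename=/mnt/eio/test --rw=randwrite \
>>>       --bs=512 --size=256m --direct=1   # below the block size
>>>   fio --name=large --filename=/mnt/eio/test --rw=randwrite \
>>>       --bs=64k --size=256m --direct=1   # above the block size
>>>
>>> If the small run shows spinning-disk latency, it's bypassing the cache.)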
>>>
>>> Flashcache doesn't respect barriers (or does it now?) - if that's OK
>>> for you, then go for it; it should be stable, and I used it in
>>> production in the past without problems.
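>>>
>>> (For reference, creating a writeback flashcache device was a one-liner
>>> - names illustrative, SSD first, then the disk:
>>>
>>>   flashcache_create -p back cachedev /dev/ssd /dev/disk
>>>
>>> just keep the barrier caveat above in mind.)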
>>>
>>> bcache seemed to work fine, but I needed to
>>> a) use it for root
>>> b) disable and enable it on the fly (doh)
>>> c) make it non-persistent (flush it) before reboot - I'm not sure that
>>> was possible either (see the sysfs sketch below)
>>> d) do all of that in a customer's VM, and that customer didn't have a
>>> strong enough technical background to fiddle with it...
>>> So I haven't tested it heavily.
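>>>
>>> For the record, (b) and (c) should in principle be doable through
>>> sysfs - a sketch I never got to verify on that VM:
>>>
>>>   # stop generating new dirty data, then wait for the cache to drain
>>>   echo writethrough > /sys/block/bcache0/bcache/cache_mode
>>>   cat /sys/block/bcache0/bcache/dirty_data  # poll until this reaches 0
>>>   # detaching flushes any remaining dirty data before releasing the cache
>>>   echo 1 > /sys/block/bcache0/bcache/detach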
>>>
>>> Bcache should be the obvious choice if you are in control of the
>>> environment. At least you can cry on LKML's shoulder when you lose
>>> data :-)
>>>
>>> Jan
>>>
>>>
>>>> On 18 Aug 2015, at 01:49, Alex Gorbachev <a...@iss-integration.com> wrote:
>>>>
>>>> What about https://github.com/Frontier314/EnhanceIO?  The last commit
>>>> was 2 months ago, but there are no external contributors :(
>>>>
>>>> The nice thing about EnhanceIO is that there is no need to change the
>>>> device name, unlike with bcache, flashcache, etc.
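>>>>
>>>> For example (device and cache names are illustrative), the cache
>>>> simply attaches to the existing node, so everything keeps using
>>>> /dev/sdb as before:
>>>>
>>>>   eio_cli create -d /dev/sdb -s /dev/nvme0n1p1 -m wb -p lru -c osd_cache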
>>>>
>>>> Best regards,
>>>> Alex
>>>>
>>>> On Thu, Jul 23, 2015 at 11:02 AM, Daniel Gryniewicz <d...@redhat.com>
>>>> wrote:
>>>>> I did some (non-Ceph) work on these and concluded that bcache was the
>>>>> best supported, most stable, and fastest. That was ~1 year ago, so
>>>>> take it with a grain of salt, but that's what I would recommend.
>>>>>
>>>>> Daniel
>>>>>
>>>>>
>>>>> ________________________________
>>>>> From: "Dominik Zalewski" <dzalew...@optlink.net>
>>>>> To: "German Anders" <gand...@despegar.com>
>>>>> Cc: "ceph-users" <ceph-users@lists.ceph.com>
>>>>> Sent: Wednesday, July 1, 2015 5:28:10 PM
>>>>> Subject: Re: [ceph-users] any recommendation of using EnhanceIO?
>>>>>
>>>>>
>>>>> Hi,
>>>>>
>>>>> I asked the same question a week or so ago (just search the mailing
>>>>> list archives for EnhanceIO :) and got some interesting answers.
>>>>>
>>>>> It looks like the project has been pretty much dead since it was
>>>>> bought out by HGST. Even their website has broken links regarding
>>>>> EnhanceIO.
>>>>>
>>>>> I’m keen to try flashcache or bcache (it’s been in the mainline
>>>>> kernel for some time).
>>>>>
>>>>> Dominik
>>>>>
>>>>> On 1 Jul 2015, at 21:13, German Anders <gand...@despegar.com> wrote:
>>>>>
>>>>> Hi cephers,
>>>>>
>>>>> Has anyone out there implemented EnhanceIO in a production
>>>>> environment? Any recommendations? Any performance numbers to share
>>>>> comparing runs with and without it?
>>>>>
>>>>> Thanks in advance,
>>>>>
>>>>> German
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
