Here are some recent benchmarks on bcache in Linux kernel 4.1:

http://www.phoronix.com/scan.php?page=article&item=linux-41-bcache&num=1
On Fri, 26 Jun 2015 at 11:12, Nick Fisk <n...@fisk.me.uk> wrote:

> I think flashcache bombs out; I must admit I haven't tested that yet, but
> as I would only be running it in writecache mode, there is no requirement
> I can think of for it to keep running gracefully.
>
>
>
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf
> Of Dominik Zalewski
> Sent: 26 June 2015 10:54
> To: Nick Fisk
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Ceph and EnhanceIO cache
>
>
>
> Thanks for your reply.
>
>
>
> Do you know by any chance how flashcache handles SSD going offline?
>
>
>
> Here is a snippet from the EnhanceIO wiki page:
>
>
>
> Failure of an SSD device in read-only and write-through modes is handled
> gracefully by allowing I/O to continue to/from the source volume. An
> application may notice a drop in performance but it will not receive any
> I/O errors.
>
> Failure of an SSD device in write-back mode obviously results in the loss
> of dirty blocks in the cache. To guard against this data loss, two SSD
> devices can be mirrored via RAID 1.
>
> EnhanceIO identifies device failures based on error codes. Depending on
> whether the failure is likely to be intermittent or permanent, it takes
> the best suited action.
>
>
>
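> Purely as an illustration of that RAID 1 point (not from the wiki; the
> device names and eio_cli flags below are from memory, so check the
> EnhanceIO docs), mirroring two cache SSDs and then creating a write-back
> cache on the mirror might look roughly like:
>
> # Mirror the two SSDs so a single SSD failure does not lose dirty blocks
> mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc /dev/sdd
> # Create a write-back EnhanceIO cache on the mirror, fronting /dev/sdb
> eio_cli create -d /dev/sdb -s /dev/md0 -p lru -m wb -c osd0_cache
>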
> Looking at the mailing lists and GitHub commits, neither flashcache nor
> EnhanceIO has had much activity since last year.
>
>
>
> Dominik
>
>
>
> On Fri, Jun 26, 2015 at 10:28 AM, Nick Fisk <n...@fisk.me.uk> wrote:
>
> > -----Original Message-----
> > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> > Dominik Zalewski
> > Sent: 26 June 2015 09:59
> > To: ceph-users@lists.ceph.com
> > Subject: [ceph-users] Ceph and EnhanceIO cache
> >
> > Hi,
> >
> > I came across this blog post mentioning using EnhanceIO (a fork of
> > flashcache) as a cache for OSDs.
> >
> >
> > http://www.sebastien-han.fr/blog/2014/10/06/ceph-and-enhanceio/
> >
> > https://github.com/stec-inc/EnhanceIO
> >
>
> >
> > I'm planning to test it with 5x 1TB HGST Travelstar 7k1000 2.5" OSDs
> > and a 256GB Transcend SSD as the EnhanceIO cache.
> >
> > I'm wondering if anyone is using EnhanceIO or others like bcache or
> > dm-cache with Ceph in production, and what your experience/results are.
>
> Not using it in production, but I have been testing all of the above,
> both caching the OSDs and caching the RBDs.
>
> If your RBDs are being used in scenarios where small sync writes are
> important (iSCSI, databases) then caching the RBDs is almost essential.
> My findings:
>
> FlashCache - Probably the best of the bunch. Has a sequential override,
> and hopefully Facebook will continue to maintain it.
> EnhanceIO - Nice that you can hot-add the cache, but it is likely to no
> longer be maintained, so risky for production.
> DMCache - Well maintained, but its biggest problem is that it only caches
> writes for blocks that are already in cache (a rough lvmcache sketch
> follows below).
> Bcache - Didn't really spend much time looking at this. Looks as if
> development activity is dying down, and there are potential stability
> issues.
> DM-WriteBoost - Great performance, really suits RBD requirements.
> Unfortunately the RAM buffering part seems to not play safe with iSCSI
> use.
>
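> As a minimal sketch of the DMCache option (via LVM's lvmcache front end;
> the device names, sizes and volume names below are only placeholders),
> caching an OSD data LV with an SSD cache pool could look something like:
>
> # /dev/sdb = spinning OSD disk, /dev/sdc = SSD (hypothetical devices)
> pvcreate /dev/sdb /dev/sdc
> vgcreate vg_osd /dev/sdb /dev/sdc
> lvcreate -n osd_data -L 900G vg_osd /dev/sdb
> lvcreate --type cache-pool -n osd_cache -L 200G vg_osd /dev/sdc
> # Attach the cache pool; I/O to vg_osd/osd_data is now cached on the SSD
> lvconvert --type cache --cachepool vg_osd/osd_cache vg_osd/osd_data
>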
> With something like flashcache on an RBD I can max out the SSD with small
> sequential write IOs, and it then passes them to the RBD in nice large
> IOs. Bursty random writes also benefit.
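>
> As a rough example (the image and device names are hypothetical), putting
> a write-back flashcache device on top of a mapped RBD might look like:
>
> rbd map rbd/myimage                      # appears as e.g. /dev/rbd0
> flashcache_create -p back cache_rbd0 /dev/ssd1 /dev/rbd0
> # use /dev/mapper/cache_rbd0 from now on instead of /dev/rbd0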
>
> Caching the OSDs, or more specifically a small section where the levelDB
> lives, can greatly improve small-block write performance. Flashcache is
> best for this, as you can limit the sequential cutoff to the levelDB
> transaction size. DMCache is potentially an option as well. You can
> probably halve OSD latency by doing this.
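>
> For example (assuming the cache was built from /dev/sdb (disk) and
> /dev/sdc (SSD), so flashcache names the instance sdb+sdc; the 64kB value
> is just a guess), the sequential cutoff is a per-cache sysctl:
>
> # don't cache sequential IO larger than 64kB, so only small
> # levelDB-style writes land on the SSD
> sysctl dev.flashcache.sdb+sdc.skip_seq_thresh_kb=64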
>
> >
> > Thanks
> >
> > Dominik
>
>
>
>
>
>