Re: [ceph-users] Ceph and EnhanceIO cache

2015-06-26 Thread Nick Fisk
I assume this was caching RBDs?

 

I have found that you have to adjust the cache-flushing parameters to allow 
flashcache to take full advantage of the high queue depths Ceph excels at. I 
guess it depends on the workload to an extent as well. It's probably also quite 
easy to max out the SSD used for the caching; something like the HGST SSD800 
range would be ideal.
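For illustration, flashcache's write-back flushing behaviour is tuned through sysctls. The cache name below is hypothetical (check /proc/sys/dev/flashcache/ for the actual <disk+ssd> name on your system), and the values are a starting point rather than tested recommendations:

```shell
# Hypothetical cache name "sdb+sdc"; list the real one with:
#   ls /proc/sys/dev/flashcache/
# Let more dirty data accumulate so destaging happens in larger batches:
sysctl -w dev.flashcache.sdb+sdc.dirty_thresh_pct=70
# Allow more concurrent cleaning I/Os in total and per cache set:
sysctl -w dev.flashcache.sdb+sdc.max_clean_ios_total=8
sysctl -w dev.flashcache.sdb+sdc.max_clean_ios_set=4
# Clean idle (fallow) dirty blocks after 60s instead of the default 15 min:
sysctl -w dev.flashcache.sdb+sdc.fallow_delay=60
```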

 

From: Andrei Mikhailovsky [mailto:and...@arhont.com] 
Sent: 26 June 2015 12:35
To: Nick Fisk
Cc: Dominik Zalewski; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph and EnhanceIO cache

 

Hi Nick,

 

I've played with Flashcache and EnhanceIO, but I decided not to use them for 
production in the end. The reason was that using either of them increased the 
number of slow requests I had on the cluster, and I also noticed a somewhat 
higher level of iowait on the VMs. At the time, I didn't have much time to 
investigate the slow-requests issue and I wasn't sure what exactly was causing 
them. All I can say is that after disabling the caching, the slow requests 
stopped.

 

Perhaps others could share if they had any issues.

 

Thanks

 


From: "Nick Fisk" 
To: "Dominik Zalewski" 
Cc: ceph-users@lists.ceph.com
Sent: Friday, 26 June, 2015 11:12:25 AM
Subject: Re: [ceph-users] Ceph and EnhanceIO cache

 

I think flashcache bombs out. I must admit I haven't tested that yet, but as I 
would only be running it in write-cache mode, there is no requirement I can 
think of for it to keep running gracefully.

 

From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of 
Dominik Zalewski
Sent: 26 June 2015 10:54
To: Nick Fisk
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph and EnhanceIO cache

 

Thanks for your reply. 

 

Do you know by any chance how flashcache handles an SSD going offline?

 

Here is a snippet from the EnhanceIO wiki page:

 

Failure of an SSD device in read-only and write-through modes is
handled gracefully by allowing I/O to continue to/from the
source volume. An application may notice a drop in performance but it
will not receive any I/O errors.

Failure of an SSD device in write-back mode obviously results in the
loss of dirty blocks in the cache. To guard against this data loss, two
SSD devices can be mirrored via RAID 1.

EnhanceIO identifies device failures based on error codes. Depending on
whether the failure is likely to be intermittent or permanent, it takes
the best suited action.
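As a sketch of the RAID 1 guard described above (all device paths and the cache name are made up for illustration, and the eio_cli flags should be checked against the EnhanceIO documentation):

```shell
# Mirror two SSDs so one SSD failing doesn't lose dirty write-back blocks:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc /dev/sdd
# Build the EnhanceIO write-back (-m wb) cache on the mirror, in front of
# the slow source device /dev/sdb:
eio_cli create -d /dev/sdb -s /dev/md0 -p lru -m wb -c ceph_cache
```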
 
Looking at the mailing list and GitHub commits, neither flashcache nor 
EnhanceIO has had much activity since last year.
 
Dominik

 

 

On Fri, Jun 26, 2015 at 10:28 AM, Nick Fisk <n...@fisk.me.uk> wrote:

> -----Original Message-----
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Dominik Zalewski
> Sent: 26 June 2015 09:59
> To: ceph-users@lists.ceph.com
> Subject: [ceph-users] Ceph and EnhanceIO cache
>
> Hi,
>
> I came across this blog post mentioning using EnhanceIO (fork of flashcache)
> as cache for OSDs.
>
> http://www.sebastien-han.fr/blog/2014/10/06/ceph-and-enhanceio/
>
> https://github.com/stec-inc/EnhanceIO
>
> I'm planning to test it with 5x 1TB HGST Travelstar 7k1000 2.5inch OSDs and
> using 256GB Transcend SSD as enhanceio cache.
>
> I'm wondering if anyone is using EnhanceIO or others like bcache, dm-cache
> with Ceph in production and what is your experience/results.

Not using in production, but have been testing all of the above, both caching 
the OSDs and RBDs.

 

If your RBDs are being used in scenarios where small sync writes are important 
(iSCSI, databases), then caching the RBDs is almost essential. My findings:

FlashCache - Probably the best of the bunch. Has a sequential override, and 
hopefully Facebook will continue to maintain it.
EnhanceIO - Nice that you can hot-add the cache, but it is likely to no longer 
be maintained, so risky for production.
DMCache - Well maintained, but the biggest problem is that it only caches 
writes for blocks that are already in the cache.
Bcache - Didn't really spend much time looking at this. It looks as if 
development activity is dying down, and there are potential stability issues.
DM-WriteBoost - Great performance, really suits RBD requirements. Unfortunately 
the RAM buffering part seems not to play safe with iSCSI use.
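For example, putting a write-back flashcache in front of a mapped RBD might look like the following (pool/image names, devices, and the cache name are hypothetical):

```shell
# Map the image; the kernel client exposes it as /dev/rbd0:
rbd map rbd/vmdisk01
# Create a write-back (-p back) flashcache: SSD partition first, then the
# backing RBD device. The cached device appears as /dev/mapper/rbd_cache.
flashcache_create -p back rbd_cache /dev/sdb1 /dev/rbd0
```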

 

With something like flashcache on an RBD I can max out the SSD with small 
sequential write IOs, and it then passes them to the RBD in nice large IOs. 
Bursty random writes also benefit.
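That pattern can be reproduced with a small-block, high-queue-depth sequential write test against the cached device (the fio invocation is a suggestion; the device path is hypothetical):

```shell
# 4k sequential writes at QD32; the SSD cache should absorb these and
# destage them to the RBD as larger I/Os:
fio --name=seq4k --filename=/dev/mapper/rbd_cache --rw=write --bs=4k \
    --iodepth=32 --ioengine=libaio --direct=1 --runtime=60 --time_based
```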

Re: [ceph-users] Ceph and EnhanceIO cache

2015-06-26 Thread Andrei Mikhailovsky
Hi Nick, 

I've played with Flashcache and EnhanceIO, but I decided not to use them for 
production in the end. The reason was that using either of them increased the 
number of slow requests I had on the cluster, and I also noticed a somewhat 
higher level of iowait on the VMs. At the time, I didn't have much time to 
investigate the slow-requests issue and I wasn't sure what exactly was causing 
them. All I can say is that after disabling the caching, the slow requests 
stopped.

Perhaps others could share if they had any issues. 

Thanks

- Original Message -

> From: "Nick Fisk" 
> To: "Dominik Zalewski" 
> Cc: ceph-users@lists.ceph.com
> Sent: Friday, 26 June, 2015 11:12:25 AM
> Subject: Re: [ceph-users] Ceph and EnhanceIO cache

> I think flashcache bombs out. I must admit I haven't tested that yet, but as I
> would only be running it in write-cache mode, there is no requirement I can
> think of for it to keep running gracefully.

> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Dominik Zalewski
> Sent: 26 June 2015 10:54
> To: Nick Fisk
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Ceph and EnhanceIO cache

> Thanks for your reply.

> Do you know by any chance how flashcache handles SSD going offline?

> Here is a snippet from the EnhanceIO wiki page:

> Failure of an SSD device in read-only and write-through modes is
> handled gracefully by allowing I/O to continue to/from the
> source volume. An application may notice a drop in performance but it
> will not receive any I/O errors.
> Failure of an SSD device in write-back mode obviously results in the
> loss of dirty blocks in the cache. To guard against this data loss, two
> SSD devices can be mirrored via RAID 1.
> EnhanceIO identifies device failures based on error codes. Depending on
> whether the failure is likely to be intermittent or permanent, it takes
> the best suited action.
> Looking at the mailing list and GitHub commits, neither flashcache nor
> EnhanceIO has had much activity since last year.
> Dominik

> On Fri, Jun 26, 2015 at 10:28 AM, Nick Fisk <n...@fisk.me.uk> wrote:
> > > -----Original Message-----
> > > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> > > Dominik Zalewski
> > > Sent: 26 June 2015 09:59
> > > To: ceph-users@lists.ceph.com
> > > Subject: [ceph-users] Ceph and EnhanceIO cache
> > >
> > > Hi,
> > >
> > > I came across this blog post mentioning using EnhanceIO (fork of flashcache)
> > > as cache for OSDs.
> > >
> > > http://www.sebastien-han.fr/blog/2014/10/06/ceph-and-enhanceio/
> > >
> > > https://github.com/stec-inc/EnhanceIO
> > >
> > > I'm planning to test it with 5x 1TB HGST Travelstar 7k1000 2.5inch OSDs and
> > > using 256GB Transcend SSD as enhanceio cache.
> > >
> > > I'm wondering if anyone is using EnhanceIO or others like bcache, dm-cache
> > > with Ceph in production and what is your experience/results.

> > Not using in production, but have been testing all of the above, both
> > caching the OSDs and RBDs.
> >
> > If your RBDs are being used in scenarios where small sync writes are
> > important (iSCSI, databases), then caching the RBDs is almost essential. My
> > findings:
> >
> > FlashCache - Probably the best of the bunch. Has a sequential override, and
> > hopefully Facebook will continue to maintain it.
> > EnhanceIO - Nice that you can hot-add the cache, but it is likely to no
> > longer be maintained, so risky for production.
> > DMCache - Well maintained, but the biggest problem is that it only caches
> > writes for blocks that are already in the cache.
> > Bcache - Didn't really spend much time looking at this. It looks as if
> > development activity is dying down, and there are potential stability issues.
> > DM-WriteBoost - Great performance, really suits RBD requirements.
> > Unfortunately the RAM buffering part seems not to play safe with iSCSI use.
> >
> > With something like flashcache on an RBD I can max out the SSD with small
> > sequential write IOs, and it then passes them to the RBD in nice large
> > IOs. Bursty random writes also benefit.
> >
> > Caching the OSDs, or more specifically a small section where the LevelDB
> > lives, can greatly improve small block write performance. Flashcache is best
> > for this as you can limit the sequential cutoff to the LevelDB transaction
> > size. DMcache is potentially an option as well. You can probably halve OSD
> > latency by doing this.

Re: [ceph-users] Ceph and EnhanceIO cache

2015-06-26 Thread Dominik Zalewski
Here are some recent benchmarks on bcache in Linux kernel 4.1

http://www.phoronix.com/scan.php?page=article&item=linux-41-bcache&num=1
On Fri, 26 Jun 2015 at 11:12, Nick Fisk  wrote:

> I think flashcache bombs out. I must admit I haven't tested that yet, but as
> I would only be running it in write-cache mode, there is no requirement I
> can think of for it to keep running gracefully.
>
>
>
> *From:* ceph-users [mailto:ceph-users-boun...@lists.ceph.com] *On Behalf
> Of *Dominik Zalewski
> *Sent:* 26 June 2015 10:54
> *To:* Nick Fisk
> *Cc:* ceph-users@lists.ceph.com
> *Subject:* Re: [ceph-users] Ceph and EnhanceIO cache
>
>
>
> Thanks for your reply.
>
>
>
> Do you know by any chance how flashcache handles SSD going offline?
>
>
>
> Here is a snippet from the EnhanceIO wiki page:
>
>
>
> Failure of an SSD device in read-only and write-through modes is
> handled gracefully by allowing I/O to continue to/from the
> source volume. An application may notice a drop in performance but it
> will not receive any I/O errors.
>
> Failure of an SSD device in write-back mode obviously results in the
> loss of dirty blocks in the cache. To guard against this data loss, two
> SSD devices can be mirrored via RAID 1.
>
> EnhanceIO identifies device failures based on error codes. Depending on
> whether the failure is likely to be intermittent or permanent, it takes
> the best suited action.
>
>
>
> Looking at the mailing list and GitHub commits, neither flashcache nor
> EnhanceIO has had much activity since last year.
>
>
>
> Dominik
>
>
>
> On Fri, Jun 26, 2015 at 10:28 AM, Nick Fisk  wrote:
>
> > -Original Message-
> > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> > Dominik Zalewski
> > Sent: 26 June 2015 09:59
> > To: ceph-users@lists.ceph.com
> > Subject: [ceph-users] Ceph and EnhanceIO cache
> >
> > Hi,
> >
> > I came across this blog post mentioning using EnhanceIO (fork of
> flashcache)
> > as cache for OSDs.
> >
>
> >
> http://www.sebastien-han.fr/blog/2014/10/06/ceph-and-enhanceio/
> >
> >
> https://github.com/stec-inc/EnhanceIO
>
>
> >
> > I'm planning to test it with 5x 1TB HGST Travelstar 7k1000 2.5inch OSDs
> and
> > using 256GB Transcend SSD as enhanceio cache.
> >
> > I'm wondering if anyone is using EnhanceIO or others like bcache,
> dm-cache
> > with Ceph in production and what is your experience/results.
>
> Not using in production, but have been testing all of the above, both
> caching the OSDs and RBDs.
>
> If your RBDs are being used in scenarios where small sync writes are
> important (iSCSI, databases), then caching the RBDs is almost essential. My
> findings:
>
> FlashCache - Probably the best of the bunch. Has a sequential override, and
> hopefully Facebook will continue to maintain it.
> EnhanceIO - Nice that you can hot-add the cache, but it is likely to no
> longer be maintained, so risky for production.
> DMCache - Well maintained, but the biggest problem is that it only caches
> writes for blocks that are already in the cache.
> Bcache - Didn't really spend much time looking at this. It looks as if
> development activity is dying down, and there are potential stability issues.
> DM-WriteBoost - Great performance, really suits RBD requirements.
> Unfortunately the RAM buffering part seems not to play safe with iSCSI use.
>
> With something like flashcache on an RBD I can max out the SSD with small
> sequential write IOs, and it then passes them to the RBD in nice large
> IOs. Bursty random writes also benefit.
>
> Caching the OSDs, or more specifically a small section where the LevelDB
> lives, can greatly improve small block write performance. Flashcache is best
> for this as you can limit the sequential cutoff to the LevelDB transaction
> size. DMcache is potentially an option as well. You can probably halve OSD
> latency by doing this.
>
> >
> > Thanks
> >
> > Dominik
>
>
>
>
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph and EnhanceIO cache

2015-06-26 Thread Nick Fisk
I think flashcache bombs out. I must admit I haven't tested that yet, but as I 
would only be running it in write-cache mode, there is no requirement I can 
think of for it to keep running gracefully.

 

From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of 
Dominik Zalewski
Sent: 26 June 2015 10:54
To: Nick Fisk
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph and EnhanceIO cache

 

Thanks for your reply. 

 

Do you know by any chance how flashcache handles an SSD going offline?

 

Here is a snippet from the EnhanceIO wiki page:

 

Failure of an SSD device in read-only and write-through modes is
handled gracefully by allowing I/O to continue to/from the
source volume. An application may notice a drop in performance but it
will not receive any I/O errors.

Failure of an SSD device in write-back mode obviously results in the
loss of dirty blocks in the cache. To guard against this data loss, two
SSD devices can be mirrored via RAID 1.

EnhanceIO identifies device failures based on error codes. Depending on
whether the failure is likely to be intermittent or permanent, it takes
the best suited action.
 
Looking at the mailing list and GitHub commits, neither flashcache nor 
EnhanceIO has had much activity since last year.
 
Dominik

 

 

On Fri, Jun 26, 2015 at 10:28 AM, Nick Fisk <n...@fisk.me.uk> wrote:

> -----Original Message-----
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Dominik Zalewski
> Sent: 26 June 2015 09:59
> To: ceph-users@lists.ceph.com
> Subject: [ceph-users] Ceph and EnhanceIO cache
>
> Hi,
>
> I came across this blog post mentioning using EnhanceIO (fork of flashcache)
> as cache for OSDs.
>
> http://www.sebastien-han.fr/blog/2014/10/06/ceph-and-enhanceio/
>
> https://github.com/stec-inc/EnhanceIO
>
> I'm planning to test it with 5x 1TB HGST Travelstar 7k1000 2.5inch OSDs and
> using 256GB Transcend SSD as enhanceio cache.
>
> I'm wondering if anyone is using EnhanceIO or others like bcache, dm-cache
> with Ceph in production and what is your experience/results.

Not using in production, but have been testing all of the above, both caching 
the OSDs and RBDs.

If your RBDs are being used in scenarios where small sync writes are important 
(iSCSI, databases), then caching the RBDs is almost essential. My findings:

FlashCache - Probably the best of the bunch. Has a sequential override, and 
hopefully Facebook will continue to maintain it.
EnhanceIO - Nice that you can hot-add the cache, but it is likely to no longer 
be maintained, so risky for production.
DMCache - Well maintained, but the biggest problem is that it only caches 
writes for blocks that are already in the cache.
Bcache - Didn't really spend much time looking at this. It looks as if 
development activity is dying down, and there are potential stability issues.
DM-WriteBoost - Great performance, really suits RBD requirements. Unfortunately 
the RAM buffering part seems not to play safe with iSCSI use.

With something like flashcache on an RBD I can max out the SSD with small 
sequential write IOs, and it then passes them to the RBD in nice large IOs. 
Bursty random writes also benefit.

Caching the OSDs, or more specifically a small section where the LevelDB 
lives, can greatly improve small block write performance. Flashcache is best 
for this as you can limit the sequential cutoff to the LevelDB transaction 
size. DMcache is potentially an option as well. You can probably halve OSD 
latency by doing this.
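The sequential cutoff referred to here is flashcache's skip_seq_thresh_kb sysctl; a sketch, with the cache name and threshold chosen purely for illustration:

```shell
# I/O that is sequential beyond 64KB bypasses the SSD, so only small
# (e.g. LevelDB-transaction-sized) writes get cached. The cache name is
# hypothetical; see /proc/sys/dev/flashcache/ for the real one.
sysctl -w dev.flashcache.sdb+sdc.skip_seq_thresh_kb=64
```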

>
> Thanks
>
> Dominik





 




___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph and EnhanceIO cache

2015-06-26 Thread Dominik Zalewski
Thanks for your reply.

Do you know by any chance how flashcache handles an SSD going offline?

Here is a snippet from the EnhanceIO wiki page:

 Failure of an SSD device in read-only and write-through modes is

handled gracefully by allowing I/O to continue to/from the
source volume. An application may notice a drop in performance but it
will not receive any I/O errors.

Failure of an SSD device in write-back mode obviously results in the
loss of dirty blocks in the cache. To guard against this data loss, two
SSD devices can be mirrored via RAID 1.

EnhanceIO identifies device failures based on error codes. Depending on
whether the failure is likely to be intermittent or permanent, it takes
the best suited action.


Looking at the mailing list and GitHub commits, neither flashcache nor
EnhanceIO has had much activity since last year.


Dominik



On Fri, Jun 26, 2015 at 10:28 AM, Nick Fisk  wrote:

> > -Original Message-
> > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> > Dominik Zalewski
> > Sent: 26 June 2015 09:59
> > To: ceph-users@lists.ceph.com
> > Subject: [ceph-users] Ceph and EnhanceIO cache
> >
> > Hi,
> >
> > I came across this blog post mentioning using EnhanceIO (fork of
> flashcache)
> > as cache for OSDs.
> >
> > http://www.sebastien-han.fr/blog/2014/10/06/ceph-and-enhanceio/
> >
> > https://github.com/stec-inc/EnhanceIO
> >
> > I'm planning to test it with 5x 1TB HGST Travelstar 7k1000 2.5inch OSDs
> and
> > using 256GB Transcend SSD as enhanceio cache.
> >
> > I'm wondering if anyone is using EnhanceIO or others like bcache,
> dm-cache
> > with Ceph in production and what is your experience/results.
>
> Not using in production, but have been testing all of the above, both
> caching the OSDs and RBDs.
>
> If your RBDs are being used in scenarios where small sync writes are
> important (iSCSI, databases), then caching the RBDs is almost essential. My
> findings:
>
> FlashCache - Probably the best of the bunch. Has a sequential override, and
> hopefully Facebook will continue to maintain it.
> EnhanceIO - Nice that you can hot-add the cache, but it is likely to no
> longer be maintained, so risky for production.
> DMCache - Well maintained, but the biggest problem is that it only caches
> writes for blocks that are already in the cache.
> Bcache - Didn't really spend much time looking at this. It looks as if
> development activity is dying down, and there are potential stability issues.
> DM-WriteBoost - Great performance, really suits RBD requirements.
> Unfortunately the RAM buffering part seems not to play safe with iSCSI use.
>
> With something like flashcache on an RBD I can max out the SSD with small
> sequential write IOs, and it then passes them to the RBD in nice large
> IOs. Bursty random writes also benefit.
>
> Caching the OSDs, or more specifically a small section where the LevelDB
> lives, can greatly improve small block write performance. Flashcache is best
> for this as you can limit the sequential cutoff to the LevelDB transaction
> size. DMcache is potentially an option as well. You can probably halve OSD
> latency by doing this.
>
> >
> > Thanks
> >
> > Dominik
>
>
>
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph and EnhanceIO cache

2015-06-26 Thread Nick Fisk
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Dominik Zalewski
> Sent: 26 June 2015 09:59
> To: ceph-users@lists.ceph.com
> Subject: [ceph-users] Ceph and EnhanceIO cache
> 
> Hi,
> 
> I came across this blog post mentioning using EnhanceIO (fork of flashcache)
> as cache for OSDs.
> 
> http://www.sebastien-han.fr/blog/2014/10/06/ceph-and-enhanceio/
> 
> https://github.com/stec-inc/EnhanceIO
> 
> I'm planning to test it with 5x 1TB HGST Travelstar 7k1000 2.5inch OSDs and
> using 256GB Transcend SSD as enhanceio cache.
> 
> I'm wondering if anyone is using EnhanceIO or others like bcache, dm-cache
> with Ceph in production and what is your experience/results.

Not using in production, but have been testing all of the above, both caching 
the OSDs and RBDs.

If your RBDs are being used in scenarios where small sync writes are important 
(iSCSI, databases), then caching the RBDs is almost essential. My findings:

FlashCache - Probably the best of the bunch. Has a sequential override, and 
hopefully Facebook will continue to maintain it.
EnhanceIO - Nice that you can hot-add the cache, but it is likely to no longer 
be maintained, so risky for production.
DMCache - Well maintained, but the biggest problem is that it only caches 
writes for blocks that are already in the cache.
Bcache - Didn't really spend much time looking at this. It looks as if 
development activity is dying down, and there are potential stability issues.
DM-WriteBoost - Great performance, really suits RBD requirements. Unfortunately 
the RAM buffering part seems not to play safe with iSCSI use.

With something like flashcache on an RBD I can max out the SSD with small 
sequential write IOs, and it then passes them to the RBD in nice large IOs. 
Bursty random writes also benefit.

Caching the OSDs, or more specifically a small section where the LevelDB 
lives, can greatly improve small block write performance. Flashcache is best 
for this as you can limit the sequential cutoff to the LevelDB transaction 
size. DMcache is potentially an option as well. You can probably halve OSD 
latency by doing this.

> 
> Thanks
> 
> Dominik




___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com