Re: [ceph-users] Useful visualizations / metrics

2014-04-12 Thread Jason Villalta

Re: [ceph-users] Useful visualizations / metrics

2014-04-12 Thread Jason Villalta
…OSDs/nodes. I am not sure there is a specific metric in Ceph for this, but it would be awesome if there were. On Sat, Apr 12, 2014 at 10:37 AM, Greg Poirier greg.poir...@opower.com wrote: Curious as to how you define cluster latency. On Sat, Apr 12, 2014 at 7:21 AM, Jason Villalta…
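For per-OSD latency numbers specifically, a couple of stock commands expose counters that could feed a dashboard; a minimal sketch, assuming the default admin-socket path and an osd.0 running on the local node:

    # Commit/apply latency per OSD, as summarized by the monitors.
    ceph osd perf

    # Full performance-counter dump (including op latencies) from one OSD's
    # admin socket; the socket path below is the default and may differ.
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump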

[ceph-users] Rbd image performance

2013-12-12 Thread Jason Villalta
Has anyone tried scaling a VM's I/O by adding additional disks and striping them in the guest OS? I am curious what effect this would have on I/O performance.
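One way to try this inside the guest is software RAID-0 across several RBD-backed virtual disks; a minimal sketch, assuming the extra disks show up as /dev/vdb, /dev/vdc and /dev/vdd (hypothetical device names):

    # Inside the guest: stripe three virtio disks into a single RAID-0 device.
    mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/vdb /dev/vdc /dev/vdd

    # Filesystem and mount point are placeholders.
    mkfs.xfs /dev/md0
    mount /dev/md0 /mnt/striped

Worth keeping in mind that RBD already stripes each image across many 4 MB objects by default, so how much guest-level striping adds on top is exactly the open question.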

Re: [ceph-users] About Ceph SSD and HDD strategy

2013-10-07 Thread Jason Villalta

Re: [ceph-users] About Ceph SSD and HDD strategy

2013-10-07 Thread Jason Villalta
I found this without much effort: http://www.sebastien-han.fr/blog/2012/11/15/make-your-rbd-fly-with-flashcache/ On Mon, Oct 7, 2013 at 11:39 AM, Jason Villalta ja...@rubixnet.com wrote: I also would be interested in how bcache or flashcache would integrate…
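For reference, the flashcache recipe in that post amounts to putting an SSD-backed cache device in front of a slower block device; a minimal sketch, assuming the flashcache module and tools are installed, with /dev/sdb1 as a hypothetical SSD partition and /dev/sdc as the slow device:

    # Create a write-back flashcache device named "cached_disk" that fronts
    # the slow device with the SSD partition (device names are placeholders).
    flashcache_create -p back cached_disk /dev/sdb1 /dev/sdc

    # The combined device then shows up under /dev/mapper and is used like
    # any other block device.
    mkfs.xfs /dev/mapper/cached_disk
    mount /dev/mapper/cached_disk /mnt/cached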

Re: [ceph-users] About Ceph SSD and HDD strategy

2013-10-07 Thread Jason Villalta
…caching for writes. On Mon, Oct 7, 2013 at 11:43 AM, Jason Villalta ja...@rubixnet.com wrote: I found this without much effort: http://www.sebastien-han.fr/blog/2012/11/15/make-your-rbd-fly-with-flashcache/ …

Re: [ceph-users] Ceph performance with 8K blocks.

2013-09-20 Thread Jason Villalta
…those to pull from three SSD disks on a local machine at least as fast as one native SSD test. But I don't see that; it's actually slower. On Wed, Sep 18, 2013 at 4:02 PM, Jason Villalta ja...@rubixnet.com wrote: Thanks Mike, high hopes, right ;) I guess we are not doing too badly compared to you…

Re: [ceph-users] Ceph performance with 8K blocks.

2013-09-20 Thread Jason Villalta
…synchronous / non-cached read, you should probably specify 'iflag=direct'. On Friday, September 20, 2013, Jason Villalta wrote: Mike, so I do have to ask: where would the extra latency be coming from if all my OSDs are on the same machine that my test VM is running on? I have tried every SSD…
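To make the suggestion above concrete, the same read test with and without the page cache in play might look like this (the file name comes from the thread):

    # Buffered read: the page cache can make this look far faster than the storage.
    dd if=ddbenchfile of=/dev/null bs=8K

    # Direct read: iflag=direct bypasses the page cache, so every 8K read goes
    # through the RBD/OSD stack and reflects real latency.
    dd if=ddbenchfile of=/dev/null bs=8K iflag=direct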

Re: [ceph-users] Ceph performance with 8K blocks.

2013-09-18 Thread Jason Villalta
…would the speed be the same, or would the read speed be a factor of 10 less than the speed of the underlying disk? On Wed, Sep 18, 2013 at 4:27 AM, Alex Bligh a...@alex.org.uk wrote: On 17 Sep 2013, at 21:47, Jason Villalta wrote: dd if=ddbenchfile of=/dev/null bs=8K 819200 bytes (8.2 GB…

Re: [ceph-users] Ceph performance with 8K blocks.

2013-09-18 Thread Jason Villalta
Any other thoughts on this thread, guys? Am I just crazy to want near-native SSD performance on a small SSD cluster? On Wed, Sep 18, 2013 at 8:21 AM, Jason Villalta ja...@rubixnet.com wrote: That dd gives me this: dd if=ddbenchfile of=- bs=8K | dd if=- of=/dev/null bs=8K 819200 bytes…

Re: [ceph-users] Ceph performance with 8K blocks.

2013-09-18 Thread Jason Villalta
…MB/s. dd if=/dev/zero of=1g bs=1M count=1024 oflag=dsync: 1024+0 records in, 1024+0 records out, 1073741824 bytes (1.1 GB) copied, 37.4144 s, 28.7 MB/s. As you can see, latency is a killer. On Sep 18, 2013, at 3:23 PM, Jason Villalta ja...@rubixnet.com wrote: Any other thoughts on this thread…
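The latency point is easy to reproduce: syncing after every write behaves very differently from flushing once at the end. A minimal sketch (file name and size are placeholders):

    # Sync after every 1M write: each write waits out the full journal and
    # replication round trip, so per-operation latency caps the throughput.
    dd if=/dev/zero of=testfile bs=1M count=1024 oflag=dsync

    # Write everything, then flush once at the end: throughput is much closer
    # to what the backing devices can stream.
    dd if=/dev/zero of=testfile bs=1M count=1024 conv=fdatasync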

[ceph-users] Ceph performance with 8K blocks.

2013-09-17 Thread Jason Villalta
…performance closer to native performance with 8K blocks? Thanks in advance.
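One way to measure the cluster's own small-block behaviour, independent of any VM or filesystem, is rados bench with an 8K object size; a minimal sketch, assuming a pool named rbd (pool name, duration and concurrency are placeholders):

    # 30-second write test with 8K objects and 16 concurrent ops against the pool.
    rados bench -p rbd 30 write -b 8192 -t 16

    # A sequential-read pass ("rados bench ... seq") can follow if the write
    # pass is run with --no-cleanup so its objects are left in place.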

Re: [ceph-users] Ceph performance with 8K blocks.

2013-09-17 Thread Jason Villalta
…17, 2013 at 10:56 AM, Campbell, Bill bcampb...@axcess-financial.com wrote: Windows default (NTFS) is a 4K block. Are you changing the allocation unit to 8K as a default for your configuration?…
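If the goal is an 8K allocation unit in the guest, the filesystem has to be created with that cluster size explicitly; a minimal sketch using mkntfs from ntfsprogs, with /dev/vdb1 as a hypothetical partition (the in-guest Windows equivalent is shown only as a comment):

    # Format the partition with an 8K NTFS cluster size (quick format).
    mkntfs --cluster-size 8192 --fast /dev/vdb1

    # Rough in-guest Windows equivalent (command prompt, drive letter is a placeholder):
    #   format E: /FS:NTFS /A:8192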

Re: [ceph-users] Ceph performance with 8K blocks.

2013-09-17 Thread Jason Villalta
…disks or on the same disk as the OSD? What is the replica size of your pool?…

Re: [ceph-users] Ceph performance with 8K blocks.

2013-09-17 Thread Jason Villalta
…asks for them by itself using direct I/O or frequent fsync or whatever), your performance will go way up. -Greg, Software Engineer #42 @ http://inktank.com | http://ceph.com On Tue, Sep 17, 2013 at 1:47 PM, Jason Villalta ja...@rubixnet.com wrote: Here are the stats with direct I/O: dd…
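Greg's point is easy to demonstrate with a tool that toggles direct and synchronous I/O explicitly; a minimal fio sketch (job names and sizes are placeholders), assuming fio is installed in the test VM:

    # Buffered 8K random writes: the page cache absorbs and coalesces the I/O.
    fio --name=buffered8k --rw=randwrite --bs=8k --size=1G --ioengine=libaio --iodepth=16

    # Direct, synchronous 8K random writes: each I/O waits out the full OSD
    # round trip, which is the behaviour the dd oflag=dsync numbers reflect.
    fio --name=direct8k --rw=randwrite --bs=8k --size=1G --ioengine=libaio --iodepth=16 --direct=1 --sync=1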

Re: [ceph-users] Ceph performance with 8K blocks.

2013-09-17 Thread Jason Villalta
…going to hinge on replica size and journal location. Are your journals on separate disks or on the same disk as the OSD? What is the replica size of your pool?…

Re: [ceph-users] Ceph performance with 8K blocks.

2013-09-17 Thread Jason Villalta
…to say it would make sense to just use SSD for the journal and a spindle disk for data and reads. On Tue, Sep 17, 2013 at 5:12 PM, Jason Villalta ja...@rubixnet.com wrote: Here are the results: dd of=ddbenchfile if=/dev/zero bs=8K count=100 oflag=dsync 819200 bytes (8.2 GB) copied…
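For the SSD-journal layout described above, the journal can be pointed at an SSD partition when the OSD is prepared; a minimal sketch using the ceph-disk tooling of that era, with /dev/sdb as a spinning data disk and /dev/sdc1 as a hypothetical journal partition on the SSD:

    # ceph.conf: journal size in MB (a common setting of the time).
    # [osd]
    #     osd journal size = 10240

    # Prepare an OSD whose data lives on the spinning disk and whose journal
    # lives on the SSD partition (device names are placeholders).
    ceph-disk prepare /dev/sdb /dev/sdc1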