_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
--
*Jason Villalta*
Co-founder
800.799.4407 x1230 | www.RubixTechnology.com
…OSDs/Nodes. I am not sure there is a specific metric in Ceph for this, but it would be awesome if there was.
On Sat, Apr 12, 2014 at 10:37 AM, Greg Poirier greg.poir...@opower.com wrote:
Curious as to how you define cluster latency.
On Sat, Apr 12, 2014 at 7:21 AM, Jason Villalta ja...@rubixnet.com wrote:
Has anyone tried scaling a VM's I/O by adding additional disks and striping them in the guest OS? I am curious what effect this would have on I/O performance.
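For what it's worth, a minimal sketch of that experiment (device names /dev/vdb, /dev/vdc, /dev/vdd are hypothetical; adjust to whatever the guest actually sees, and note this needs root inside the VM):

```shell
# Stripe three RBD-backed virtual disks into one RAID0 device so 8K+
# requests fan out across all three underlying RBD images.
mdadm --create /dev/md0 --level=0 --raid-devices=3 \
      --chunk=64 /dev/vdb /dev/vdc /dev/vdd

mkfs.xfs /dev/md0            # any filesystem works; XFS is a common choice
mount /dev/md0 /mnt/striped  # mount point assumed to exist

# Same style of dd used elsewhere in this thread, now against the stripe:
dd if=/dev/zero of=/mnt/striped/bench bs=8K count=100000 oflag=dsync
```

Whether this helps will depend on whether the bottleneck is per-request latency (striping won't fix that) or per-device queue depth (it might).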
I found this without much effort.
http://www.sebastien-han.fr/blog/2012/11/15/make-your-rbd-fly-with-flashcache/
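The flashcache route from that post boils down to something like this (a sketch; device names are hypothetical and the flashcache kernel module must already be built and loaded):

```
# Pair an SSD partition with an RBD as a writeback cache:
# flashcache_create -p <policy> <cache_name> <ssd_dev> <backing_dev>
flashcache_create -p back rbd-cache /dev/sdb1 /dev/rbd1

# The cached device appears under /dev/mapper and is used normally:
mkfs.xfs /dev/mapper/rbd-cache
mount /dev/mapper/rbd-cache /mnt/cached-rbd
```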
On Mon, Oct 7, 2013 at 11:39 AM, Jason Villalta ja...@rubixnet.com wrote:
I also would be interested in how bcache or flashcache would integrate.
On Mon, Oct 7, 2013 at 11:34 AM …
…caching for writes.
…those to pull from three SSD disks on a local machine at least as fast as one native SSD test. But I don't see that; it's actually slower.
On Wed, Sep 18, 2013 at 4:02 PM, Jason Villalta ja...@rubixnet.com wrote:
Thanks, Mike.
High hopes, right? ;)
I guess we are not doing too bad compared to you
…synchronous / non-cached read, you should probably specify 'iflag=direct'.
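To see what 'iflag=direct' changes, here is a self-contained sketch (file name is arbitrary): the first read can be served from the page cache, while the second opens the file with O_DIRECT and has to go through the storage path underneath.

```shell
# Make an 8 MiB test file, then read it back two ways.
f=$(mktemp ./ddbench.XXXXXX)
dd if=/dev/zero of="$f" bs=1M count=8 conv=fsync 2>/dev/null

# Cached read: right after the write above, this mostly measures RAM.
dd if="$f" of=/dev/null bs=8K 2>&1 | tail -n 1

# Direct read: O_DIRECT bypasses the page cache, so this measures the
# actual device (it errors out on filesystems without O_DIRECT support,
# e.g. tmpfs, in which case only the cached number prints cleanly).
dd if="$f" of=/dev/null bs=8K iflag=direct 2>&1 | tail -n 1

size=$(stat -c %s "$f")
rm -f "$f"
```

The gap between the two numbers is roughly the gap this thread keeps running into: cached dd figures look native, direct ones do not.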
On Friday, September 20, 2013, Jason Villalta wrote:
Mike,
So I do have to ask, where would the extra latency be coming from if all
my OSDs are on the same machine that my test VM is running on? I have
tried every SSD …
…would the speed be the same, or would the read speed be a factor of 10 less than the speed of the underlying disk?
On Wed, Sep 18, 2013 at 4:27 AM, Alex Bligh a...@alex.org.uk wrote:
On 17 Sep 2013, at 21:47, Jason Villalta wrote:
dd if=ddbenchfile of=/dev/null bs=8K
8192000000 bytes (8.2 GB) copied …
Any other thoughts on this thread, guys? Am I just crazy to want near-native SSD performance on a small SSD cluster?
On Wed, Sep 18, 2013 at 8:21 AM, Jason Villalta ja...@rubixnet.com wrote:
That dd gives me this.
dd if=ddbenchfile of=- bs=8K | dd if=- of=/dev/null bs=8K
8192000000 bytes (8.2 GB) copied … MB/s
dd if=/dev/zero of=1g bs=1M count=1024 oflag=dsync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 37.4144 s, 28.7 MB/s
As you can see, latency is a killer.
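The effect is easy to reproduce anywhere: the same write with and without oflag=dsync shows where a figure like 28.7 MB/s comes from (file name is arbitrary).

```shell
f=$(mktemp ./ddsync.XXXXXX)

# Buffered: blocks land in the page cache; the device is paid for later.
dd if=/dev/zero of="$f" bs=8K count=1000 2>&1 | tail -n 1

# oflag=dsync: every 8K block waits for the storage to acknowledge the
# write, so each block pays a full round trip of latency. With Ceph that
# round trip includes the network and the replica writes.
dd if=/dev/zero of="$f" bs=8K count=1000 oflag=dsync 2>&1 | tail -n 1

size=$(stat -c %s "$f")
rm -f "$f"
```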
On Sep 18, 2013, at 3:23 PM, Jason Villalta ja...@rubixnet.com wrote:
Any other thoughts on this thread …
…performance closer to native performance with 8K blocks?
Thanks in advance.
On Tue, Sep 17, 2013 at 10:56 AM, Campbell, Bill bcampb...@axcess-financial.com wrote:
Windows default (NTFS) is a 4k block. Are you changing the allocation
unit to 8k as a default for your configuration?
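If the goal is to match the 8K test block size, the NTFS allocation unit can be set at format time. A sketch (the drive letter is hypothetical, and this reformats the volume, destroying its contents):

```
format E: /FS:NTFS /A:8192 /Q
```

Aligning the guest's allocation unit with the benchmark block size avoids each 8K write spanning two 4K clusters.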
--
*From: *Gregory Farnum g...@inktank.com
*To: *Jason Villalta ja...@rubixnet.com
…Are your journals on separate disks or on the same disk as the OSD? What is the replica size of your pool?
--
*From: *Jason Villalta ja...@rubixnet.com
*To: *Bill Campbell bcampb...@axcess-financial.com
*Cc: *Gregory Farnum g...@inktank.com, ceph-users
ceph-users@lists.ceph.com
*Sent
…asks for them by itself (using direct I/O or frequent fsync or whatever), your performance will go way up.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Tue, Sep 17, 2013 at 1:47 PM, Jason Villalta ja...@rubixnet.com
wrote:
Here are the stats with direct I/O.
dd
…going to hinge on replica
size and journal location. Are your journals on separate disks or on the
same disk as the OSD? What is the replica size of your pool?
--
*From: *Jason Villalta ja...@rubixnet.com
*To: *Bill Campbell bcampb...@axcess-financial.com
*Cc
…to say it would make sense to just use an SSD for the journal and a spindle disk for data and reads.
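That split is just a per-OSD setting in ceph.conf (in this era of Ceph, with filestore OSDs). A minimal sketch with hypothetical paths and device names:

```
[osd.0]
# data on the spinning disk (hypothetical mount point)
osd data = /var/lib/ceph/osd/ceph-0
# journal pointed at a partition on the SSD (hypothetical device)
osd journal = /dev/sdb1
```

Writes are acknowledged once they hit the SSD journal, so synchronous small-block writes see SSD latency even though the data eventually lands on the spindle.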
On Tue, Sep 17, 2013 at 5:12 PM, Jason Villalta ja...@rubixnet.com wrote:
Here are the results:
dd of=ddbenchfile if=/dev/zero bs=8K count=1000000 oflag=dsync
8192000000 bytes (8.2 GB) copied …