OSDs/Nodes. I am not sure there is a specific metric in Ceph for this,
but it would be awesome if there were.
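For what it's worth, a possible starting point (a rough sketch; the OSD id is a placeholder and output formats vary by release) is the per-OSD latency counters Ceph already exposes:

ceph osd perf                                        # per-OSD commit/apply latency (ms)
ceph daemon osd.0 perf dump | grep -A3 op_latency    # op latency counters via the admin socket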
On Sat, Apr 12, 2014 at 10:37 AM, Greg Poirier wrote:
> Curious as to how you define cluster latency.
>
>
> On Sat, Apr 12, 2014 at 7:21 AM, Jason Villalta wrote:
>
Just looking for some suggestions. Thanks!
> feel I should be getting significantly more from Ceph than I am able
> to.
>
> Of course, as soon as bcache stops providing benefits (i.e. data is pushed
> out of the SSD cache), the raw performance drops to that of a standard SATA
> drive, around 120 IOPS.
>
> Regards
> --
Thanks for the info, everyone.
On Dec 16, 2013 1:23 AM, "Kyle Bader" wrote:
> >> Has anyone tried scaling a VM's IO by adding additional disks and
> >> striping them in the guest OS? I am curious what effect this would have
> >> on IO performance?
>
> > Why would it? You can also change the stripe
Has anyone tried scaling a VM's IO by adding additional disks and striping
them in the guest OS? I am curious what effect this would have on IO
performance?
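For illustration, two rough sketches of what that could look like (the image name, sizes, and device names are placeholders, not taken from this thread):

# stripe at the RBD image level (format 2 images allow custom striping)
rbd create --size 10240 --image-format 2 --stripe-unit 65536 --stripe-count 4 rbd/vm-disk1
# or stripe inside the guest across four attached virtio disks with mdadm
mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/vdb /dev/vdc /dev/vdd /dev/vde
mkfs.xfs /dev/md0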
I have noticed this as well when using ceph-deploy to configure Ceph.
From what I can tell it just creates symlinks from the default OSD location
at /var/lib/ceph. The same goes for the journal: if it is on a different device, a
symlink is created from the directory.
Then it appears the OSDs are just defined i
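(For illustration, a hypothetical layout of such an OSD with its journal on a separate device; the path and device name are placeholders:)

ls -l /var/lib/ceph/osd/ceph-0/journal
# journal -> /dev/sdb1    <- symlink pointing at the journal partition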
caching for writes.
On Mon, Oct 7, 2013 at 11:43 AM, Jason Villalta wrote:
> I found this without much effort.
>
> http://www.sebastien-han.fr/blog/2012/11/15/make-your-rbd-fly-with-flashcache/
>
>
> On Mon, Oct 7, 2013 at 11:39 AM, Jason Villalta wrote:
>
>> I also would be interested in how bcache or flashcache would integrate.
I found this without much effort.
http://www.sebastien-han.fr/blog/2012/11/15/make-your-rbd-fly-with-flashcache/
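The gist of that approach, roughly (a sketch only; the device names are placeholders and the writeback cache mode is just an example):

# put a writeback flashcache device in front of a mapped RBD image
flashcache_create -p back rbd_cache /dev/ssd1 /dev/rbd0
mkfs.xfs /dev/mapper/rbd_cache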
On Mon, Oct 7, 2013 at 11:39 AM, Jason Villalta wrote:
> I also would be interested in how bcache or flashcache would integrate.
>
>
> On Mon, Oct 7, 2013 at 11:3
ach could have the most
> >> advantage.
> >>
> >> Your point of view would definitely help me.
> >>
> >> Sincerely,
> >> Martin
> >>
> >> --
> >> Martin Catudal
> >> IT Manager (Responsable TIC)
> >> Ressources Me
her testing
> "dd performance" as opposed to "using dd to test performance") if the
> concern is what to expect for your multi-tenant VM block store.
>
> Personally, I get more bugged out over many-thread random read throughput
> or synchronous write latency.
>
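For those cases, something like fio gives a more representative number than dd (a rough sketch; the job parameters and target paths are arbitrary placeholders):

# many-thread random read throughput
fio --name=randread --ioengine=libaio --direct=1 --rw=randread --bs=4k --iodepth=32 --numjobs=8 --runtime=60 --filename=/dev/rbd0
# synchronous write latency
fio --name=syncwrite --ioengine=sync --fsync=1 --rw=write --bs=4k --size=1g --runtime=60 --filename=/mnt/test/fio.dat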
e, but assuming you want a solid synchronous / non-cached read, you
> should probably specify 'iflag=direct'.
>
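In the spirit of the read tests earlier in the thread, that would look something like the following (the file name matches the earlier examples; the flag is the only addition):

dd if=ddbenchfile of=/dev/null bs=8K iflag=direct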
> On Friday, September 20, 2013, Jason Villalta wrote:
>
>> Mike,
>> So I do have to ask, where would the extra latency be coming from if all
>> my OSDs
those
to pull from three SSD disks on a local machine at least as fast as one native
SSD test. But I don't see that; it's actually slower.
On Wed, Sep 18, 2013 at 4:02 PM, Jason Villalta wrote:
> Thanks, Mike.
> High hopes, right? ;)
>
> I guess we are not doing too bad compared to
1.1 GB) copied, 6.26289 s, 171 MB/s
> dd if=/dev/zero of=1g bs=1M count=1024 oflag=dsync
> 1024+0 records in
> 1024+0 records out
> 1073741824 bytes (1.1 GB) copied, 37.4144 s, 28.7 MB/s
>
> As you can see, latency is a killer.
>
> On Sep 18, 2013, at 3:23 PM, Jason Villalta
Any other thoughts on this thread, guys? Am I just crazy to want near-native
SSD performance on a small SSD cluster?
On Wed, Sep 18, 2013 at 8:21 AM, Jason Villalta wrote:
> That dd gives me this.
>
> dd if=ddbenchfile of=- bs=8K | dd if=- of=/dev/null bs=8K
> 819200 bytes (8.
the speed be the same or would the read speed be a factor of 10
less than the speed of the underlying disk?
On Wed, Sep 18, 2013 at 4:27 AM, Alex Bligh wrote:
>
> On 17 Sep 2013, at 21:47, Jason Villalta wrote:
>
> > dd if=ddbenchfile of=/dev/null bs=8K
> > 819200
say it would make
sense to just use an SSD for the journal and a spindle disk for data and reads.
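A sketch of what that split can look like in ceph.conf (the paths, device, and size are placeholders):

[osd]
    osd journal size = 10240              ; journal size in MB
[osd.0]
    osd data = /var/lib/ceph/osd/ceph-0   ; data on the spinning disk
    osd journal = /dev/sdb1               ; journal partition on the SSD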
On Tue, Sep 17, 2013 at 5:12 PM, Jason Villalta wrote:
> Here are the results:
>
> dd of=ddbenchfile if=/dev/zero bs=8K count=100 oflag=dsync
> 819200 bytes (8.2 GB) copied, 266.87
> RADOS performance from what I've seen is largely going to hinge on replica
> size and journal location. Are your journals on separate disks or on the
> same disk as the OSD? What is the replica size of your pool?
>
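Both are quick to check (a sketch, assuming the default 'rbd' pool; adjust the pool name as needed):

ceph osd pool get rbd size                 # current replica count
ceph osd pool set rbd size 3               # change it if needed
ls -l /var/lib/ceph/osd/ceph-*/journal     # do the journals point at separate devices?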
directIO or
> frequent fsync or whatever) your performance will go way up.
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>
>
> On Tue, Sep 17, 2013 at 1:47 PM, Jason Villalta
> wrote:
> >
> > Here are the stats with direct io.
> >
of
>> clients, and if you don't force those 8k sync IOs (which RBD won't,
>> unless the application asks for them by itself using directIO or
>> frequent fsync or whatever) your performance will go way up.
>> -Greg
>> Software Engineer #42 @ http://inktank.com | h
You can deploy an OSD to a directory using ceph-deploy. Use ceph-deploy osd
prepare host:/path
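Roughly (the hostname and path are placeholders):

ceph-deploy osd prepare node1:/var/lib/ceph/osd/osd-dir
ceph-deploy osd activate node1:/var/lib/ceph/osd/osd-dir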
On Sep 17, 2013 1:40 PM, "Jordi Arcas" wrote:
> Hi!
> I have a remote server with one drive where Ubuntu is installed. I can't
> create another partition on the disk to install an OSD because it is mounted.
> There
al location. Are your journals on separate disks or on the
> same disk as the OSD? What is the replica size of your pool?
>
17, 2013 at 10:56 AM, Campbell, Bill <
bcampb...@axcess-financial.com> wrote:
> The Windows (NTFS) default is a 4K allocation unit. Are you changing the
> allocation unit to 8K as a default for your configuration?
>
performance closer to
native performance with 8K blocks?
Thanks in advance.
--
*Jason Villalta*
Co-founder
800.799.4407x1230 | www.RubixTechnology.com