Cc: Marcus Sorensen shadow...@gmail.com, Josh Durgin josh.dur...@inktank.com, Alexandre DERUMIER aderum...@odiso.com, Sage Weil s...@inktank.com, ceph-devel ceph-devel@vger.kernel.org
Sent: Saturday 3 November 2012 18:09:11
Subject: Re: slow fio random read benchmark, need help
On Thu, Nov 1, 2012 at 6:00 PM, Dietmar Maurer diet...@proxmox.com wrote:
I always thought a distributed block storage could do such things
faster (or at least as fast) than a single centralized store?
That rather depends on what makes up each of them. ;)
On Thu, Nov 1, 2012 at 6:11 AM,
Cc: Dietmar Maurer diet...@proxmox.com, Josh Durgin josh.dur...@inktank.com, Alexandre DERUMIER aderum...@odiso.com, Marcus Sorensen shadow...@gmail.com, Sage Weil s...@inktank.com, ceph-devel ceph-devel@vger.kernel.org
Sent: Thursday 1 November 2012 11:54:03
Subject: Re: slow fio random read benchmark, need help
From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Alexandre DERUMIER
Sent: Wednesday, 31 October 2012 18:27
To: Marcus Sorensen
Cc: Sage Weil; ceph-devel
Subject: Re: slow fio random read benchmark, need help
Thanks Marcus,
indeed gigabit ethernet.
Note that my iscsi results (40k) were with multipath, so...
On 01.11.2012 08:38, Dietmar Maurer wrote:
I do not really understand that network latency argument.
If one can get 40K iops with iSCSI, why can't I get the same with rados/ceph?
Note: network latency is the same in both cases
What do I miss?
Good question. Also I've seen 20k iops on ceph...
Sent: Wednesday 31 October 2012 18:08:11
Subject: Re: slow fio random read benchmark, need help
5000 is actually really good, if you ask me. Assuming everything is connected
via gigabit. If you get 40k iops locally, you add the latency of tcp, as well
as that of the ceph services and VM layer, and that's what you get. On my
network I get...
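A back-of-the-envelope check of that argument (my own sketch; the overhead split is an assumption, not a measurement from this thread):

# 40k iops locally means ~25 us per IO; adding a gigabit round trip plus
# an assumed ceph/VM overhead, a strictly serialized stream lands at ~5000.
local_iops = 40_000
per_io_local = 1.0 / local_iops      # ~25 us per IO when local
tcp_rtt = 100e-6                     # ~100 us best-case gigabit round trip
ceph_vm_overhead = 75e-6             # assumed ceph services + VM layer cost
per_io_remote = per_io_local + tcp_rtt + ceph_vm_overhead
print(1.0 / per_io_remote)           # ~5000 iops, one IO at a time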
On 01.11.2012 11:40, Gregory Farnum wrote:
I'm not sure that latency addition is quite correct. Most use cases
do multiple IOs at the same time, and good benchmarks tend to
reflect that.
I suspect the IO limitations here are a result of QEMU's storage
handling (or possibly our client...
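Little's law makes the same point numerically (my own sketch; the ~200 us per-IO figure is assumed from the gigabit discussion above):

# In-flight IOs = iops * per-IO latency. 5000 iops at ~200 us per IO
# implies only ~1 IO in flight despite iodepth=40 -- consistent with
# something in the stack serializing the requests.
observed_iops = 5000
per_io_latency = 200e-6
print(observed_iops * per_io_latency)   # ~1.0 effective concurrency
print(40 / per_io_latency)              # 200000 iops if all 40 were truly concurrent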
Actually that didn't illustrate my point very well, since you see
individual requests being sent to the driver without waiting for
individual completion, but if you look at the full output you can see
that once the queue is full, you're at the mercy of waiting for
individual IOs to complete before...
For the record, I'm not saying that it's the entire reason why the performance
is lower (obviously since iscsi is better), I'm just saying that when you're
talking about high iops, adding 100us (best case gigabit) to each request and
response is significant.
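To put numbers on "significant" (my own illustration; the device latencies are assumed typical values, not figures from this thread):

# The same ~200 us of added round trip is a 9x latency hit for a fast
# local store but barely measurable for a spinning disk.
added = 200e-6
for name, base in [("fast local store, 25 us/IO", 25e-6),
                   ("spinning disk, 5 ms/IO", 5e-3)]:
    print(f"{name}: {1/base:.0f} -> {1/(base+added):.0f} iops "
          f"({(base+added)/base:.1f}x per-IO latency)")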
iSCSI also uses the network (also...
Hello,
I'm doing some tests with fio from a qemu 1.2 guest (virtio disk, cache=none),
randread, with 4K block size on a small size of 1G (so it can be handled by the
buffer cache on the ceph cluster):
fio --filename=/dev/vdb --rw=randread --bs=4K --size=1000M --iodepth=40
--group_reporting
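(One thing worth checking about this invocation: fio's iodepth only takes effect with an asynchronous ioengine; with the default synchronous engine the run is effectively queue depth 1 regardless of the setting. A variant to rule that out, assuming libaio is available, might be:
fio --name=randread-test --filename=/dev/vdb --rw=randread --bs=4K --size=1000M --iodepth=40 --ioengine=libaio --direct=1 --group_reporting
where the job name is arbitrary.)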
----- Original message -----
From: Sage Weil s...@inktank.com
To: Alexandre DERUMIER aderum...@odiso.com
Cc: ceph-devel ceph-devel@vger.kernel.org
Sent: Wednesday 31 October 2012 16:57:05
Subject: Re: slow fio random read benchmark, need help
[tail of a truncated perf counter dump: ...take_sum:0,put:66605,put_sum:10236339,wait:{avgcount:0,sum:0}}}]
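(For reference, counters like these come from a ceph daemon's admin socket; the socket path here is an assumption, adjust to your cluster:
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump)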
----- Original message -----
From: Alexandre DERUMIER aderum...@odiso.com
To: Sage Weil s...@inktank.com
Cc: ceph-devel ceph-devel@vger.kernel.org
Sent: Wednesday 31 October 2012 17:29:28
Subject: Re: slow fio random read benchmark, need help
ok.
Do you have an idea if I can trace something?
Thanks,
Alexandre
----- Original message -----
From: Mark Kampe mark.ka...@inktank.com
To: Alexandre DERUMIER aderum...@odiso.com
Cc: ceph-devel ceph-devel@vger.kernel.org
Sent: Wednesday 31 October 2012 17:56:26
Subject: Re: slow fio random read benchmark, need help
I'm a little confused by the math...
----- Original message -----
From: Marcus Sorensen shadow...@gmail.com
To: Alexandre DERUMIER aderum...@odiso.com
Cc: Sage Weil s...@inktank.com, ceph-devel ceph-devel@vger.kernel.org
Sent: Wednesday 31 October 2012 18:38:46
Subject: Re: slow fio random read benchmark, need help
Yes, I was going to say that the most I've ever seen out of gigabit is
about 15k iops, with parallel tests and NFS (or iSCSI). Multipathing
may not really parallelize the io for you. It can send an io down one...
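A quick check (my own arithmetic) that 15k iops of 4K reads is nowhere near gigabit bandwidth, i.e. the ceiling is per-IO latency rather than throughput:

block = 4 * 1024                 # bytes per read
print(15_000 * block / 1e6)      # ~61 MB/s moved at 15k iops
print(1e9 / 8 / 1e6)             # 125 MB/s gigabit wire rate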
Can InfiniBand help?
----- Original message -----
From: Marcus Sorensen shadow...@gmail.com
To: Alexandre DERUMIER aderum...@odiso.com
Cc: Sage Weil s...@inktank.com, ceph-devel ceph-devel@vger.kernel.org
Sent: Wednesday 31 October 2012 20:50:36
Subject: Re: slow fio random read benchmark, need help
Come to think of it that 15k iops I mentioned was on 10G ethernet with
NFS. I have...