Apologies. Pressed 'send' even before I was done.
An update on this topic:
I ran fio again, this time with Raghavendra's epoll-rearm patch @
https://review.gluster.org/17391
IOPS increased to ~50K (from 38K).
Avg READ latency, as seen by the io-stats translator that sits above
client-io-threads, came down to 963 us (from 1666 us).
∆ (2,3) is d...
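As background, a minimal sketch of how a pass-through translator like
io-stats measures per-fop latency: stamp the clock when the fop is wound
down, diff it in the completion callback, and fold the result into a
running average. This is my own illustration of the pattern in C, not
gluster's actual io-stats code:

#include <stdint.h>
#include <stdio.h>
#include <time.h>

static uint64_t now_us(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000 + (uint64_t)ts.tv_nsec / 1000;
}

struct fop_stats {
    uint64_t count;
    uint64_t total_us; /* avg latency = total_us / count */
};

/* stamp taken just before the fop is wound down to the next translator */
static uint64_t stamp_dispatch(void)
{
    return now_us();
}

/* called from the fop's completion callback */
static void record_completion(struct fop_stats *st, uint64_t start_us)
{
    st->total_us += now_us() - start_us;
    st->count++;
}

int main(void)
{
    struct fop_stats read_stats = { 0, 0 };
    uint64_t t = stamp_dispatch();
    /* ... the READ would be serviced by the layers below this one ... */
    record_completion(&read_stats, t);
    printf("avg READ latency: %llu us over %llu fops\n",
           (unsigned long long)(read_stats.total_us / read_stats.count),
           (unsigned long long)read_stats.count);
    return 0;
}

Each io-stats instance in the stack keeps such an average independently,
which is what allows per-layer attribution of latency.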
So comparing the key latency, ∆ (2,3), in the two cases:
iodepth=1: 171 us
iodepth=8: 1453 us (in the ballpark of 171 * 8 = 1368 us). That's not good!
(I wonder if that relation roughly holds up for other values of iodepth.)
This data doesn't conclusively establish that the problem is in gluster.
You'd ...
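Spelling the reasoning out (my gloss on the numbers above): if the 8
in-flight requests were serviced fully in parallel across that span,
∆ (2,3) at iodepth=8 should stay close to the iodepth=1 value of 171 us;
if they were serviced strictly one after another, it should approach
8 * 171 = 1368 us. The measured 1453 us is essentially the serial
prediction, which is what points at serialization/contention rather than
per-request cost.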
Indeed the latency on the client side dropped with iodepth=1. :)
I ran the test twice and the results were consistent.
Here are the exact numbers:
*Translator Position*                  *Avg Latency of READ fop as seen by this translator*
1. parent of client-io-threads         437 us
To: "Krutika Dhananjay"
Cc: "Gluster Devel"
Sent: Thursday, June 8, 2017 12:22:19 PM
Subject: Re: [Gluster-devel] Performance experiments with io-stats translator
Thanks. So I was suggesting a repeat of the test, but this time with
iodepth=1 in the fio job. If reducing the number of concurrent requests
drastically reduces the high latency you're seeing from the client side,
that would strengthen the hypothesis that serialization/contention among
concurrent requests is the cause.
@Xavi/Raghavendra,
Indeed. I too suspect mutex contention at the epoll layer, and I've been
reading the corresponding code (my first time) ever since I got these
numbers.
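For anyone following along, this is the general shape of a one-shot epoll
loop; a sketch of the pattern only, not gluster's actual event code:

#include <sys/epoll.h>

static void poll_loop(int epfd)
{
    struct epoll_event ev;

    for (;;) {
        if (epoll_wait(epfd, &ev, 1, -1) != 1)
            continue;

        int fd = ev.data.fd;

        /* ... read and dispatch whatever is pending on fd ... */

        /* With EPOLLONESHOT, this re-arm is mandatory before the fd
         * can fire again. Every handler thread funnels through this
         * call (plus whatever userspace lock guards the fd's
         * bookkeeping), so at high iodepth it is a plausible
         * contention point. */
        struct epoll_event rearm = { .events = EPOLLIN | EPOLLONESHOT };
        rearm.data.fd = fd;
        epoll_ctl(epfd, EPOLL_CTL_MOD, fd, &rearm);
    }
}

Since the re-arm sits on every event's critical path, a patch that changes
when or how re-arming happens can plausibly move IOPS the way the numbers
at the top of the thread show.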
I will get back to you if I have any specific questions for you around this.
-Krutika
On Thu, Jun 8, 2017 at 9:58 AM, Raghavendra wrote:
Hi,
So I used Sanjay's setup to get these numbers, and I'm guessing it's a 10G
network. I will check again and let you know if that isn't the case.
-Krutika
Hi,
This is what my job file contains:
[global]
ioengine=libaio
#unified_rw_reporting=1
randrepeat=1
norandommap=1
group_reporting
direct=1
runtime=60
thread
size=16g
[workload]
bs=4k
rw=randread
iodepth=8
numjobs=1
file_service_type=random
filename=/perf5/iotest/fio_5
filename=/perf6/iotest/fi
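For anyone reproducing this: with ioengine=libaio and direct=1 the job
issues asynchronous O_DIRECT reads, and iodepth=8 keeps up to eight 4k
random reads in flight at a time; iodepth is the knob being varied in the
comparisons earlier in the thread. The job file is run simply as:
fio <jobfile>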
Hi Krutika,
On 06/06/17 13:35, Krutika Dhananjay wrote:
> Hi,
>
> As part of identifying performance bottlenecks within the gluster stack
> for the VM image store use-case, I loaded io-stats at multiple points on
> the client and brick stacks and ran a randrd test using fio from within
> the hosted VMs in parallel.
Nice work!
What is the network interconnect bandwidth? How much of the network
bandwidth is in use while the test is being run? Wondering if there is
saturation in the network layer.
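(One quick way to check: run something like sar -n DEV 1 or iftop on the
clients and bricks while the test is running, and compare the observed
rx/tx rates against the link speed.)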
-Vijay
Hi,
As part of identifying performance bottlenecks within the gluster stack for
the VM image store use-case, I loaded io-stats at multiple points on the
client and brick stacks and ran a randrd test using fio from within the
hosted VMs in parallel.
Before I get to the results, a little bit about the configuration:
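(My reading of the method, for anyone skimming the thread: with io-stats
instances stacked at several positions, numbered as in the latency table
earlier, the difference ∆(i,j) between the average latencies reported by
instances i and j is the time spent in everything between those two
positions; that is what lets a figure like ∆(2,3) be attributed to a
specific slice of the stack.)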