subtle bug causing loss of control over I/O bandwidths
>
Thanks a lot for these patches, Paolo!
Would you mind adding:
Reported-by: Srivatsa S. Bhat (VMware)
Tested-by: Srivatsa S. Bhat (VMware)
to the first 5 patches, as appropriate?
Thank you!
>
> [1] https://lkml.org/lkml/2019/
On 6/12/19 10:46 PM, Paolo Valente wrote:
>
>> On 12 Jun 2019, at 00:34, Srivatsa S. Bhat
>> wrote:
>>
[...]
>>
>> Hi Paolo,
>>
>
> Hi
>
>> Hope you are doing great!
>>
>
> Sort of, thanks :)
>
>
On 6/13/19 1:20 AM, Jan Kara wrote:
> On Wed 12-06-19 12:36:53, Srivatsa S. Bhat wrote:
>>
>> [ Adding Greg to CC ]
>>
>> On 6/12/19 6:04 AM, Jan Kara wrote:
>>> On Tue 11-06-19 15:34:48, Srivatsa S. Bhat wrote:
>>>> On 6/2/19 12:04 AM, Srivats
On 6/12/19 11:02 PM, Greg Kroah-Hartman wrote:
> On Wed, Jun 12, 2019 at 12:36:53PM -0700, Srivatsa S. Bhat wrote:
>>
>> [ Adding Greg to CC ]
>>
>> On 6/12/19 6:04 AM, Jan Kara wrote:
>>> On Tue 11-06-19 15:34:48, Srivatsa S. Bhat wrote:
>>>> On 6/2
[ Adding Greg to CC ]
On 6/12/19 6:04 AM, Jan Kara wrote:
> On Tue 11-06-19 15:34:48, Srivatsa S. Bhat wrote:
>> On 6/2/19 12:04 AM, Srivatsa S. Bhat wrote:
>>> On 5/30/19 3:45 AM, Paolo Valente wrote:
>>>>
>> [...]
>>>> At any rate, since you po
On 6/2/19 12:04 AM, Srivatsa S. Bhat wrote:
> On 5/30/19 3:45 AM, Paolo Valente wrote:
>>
[...]
>> At any rate, since you pointed out that you are interested in
>> out-of-the-box performance, let me complete the context: in case
>> low_latency is left set, one gets,
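The low_latency setting referred to here is a runtime BFQ tunable exposed
through sysfs. A minimal sketch of inspecting and clearing it, assuming the
disk under test is sda:

  # 1 = latency-oriented heuristics (default), 0 = favour raw throughput
  cat /sys/block/sda/queue/iosched/low_latency
  echo 0 > /sys/block/sda/queue/iosched/low_latency

The attribute only exists while bfq is the scheduler selected for that queue.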
On 5/30/19 3:45 AM, Paolo Valente wrote:
>
>
>> On 30 May 2019, at 10:29, Srivatsa S. Bhat
>> wrote:
>>
[...]
>>
>> Your fix held up well under my testing :)
>>
>
> Great!
>
>> As for throughput, with low_latency =
On 5/23/19 4:32 PM, Srivatsa S. Bhat wrote:
> On 5/22/19 7:30 PM, Srivatsa S. Bhat wrote:
>> On 5/22/19 3:54 AM, Paolo Valente wrote:
>>>
>>>
>>>> On 22 May 2019, at 12:01, Srivatsa S. Bhat
>>>> wrote:
>>>>
>>
On 5/29/19 12:41 AM, Paolo Valente wrote:
>
>
>> On 29 May 2019, at 03:09, Srivatsa S. Bhat
>> wrote:
>>
>> On 5/23/19 11:51 PM, Paolo Valente wrote:
>>>
>>>> On 24 May 2019, at 01:43, Srivatsa S. Bhat
>>
On 5/23/19 11:51 PM, Paolo Valente wrote:
>
>> On 24 May 2019, at 01:43, Srivatsa S. Bhat
>> wrote:
>>
>> When trying to run multiple dd tasks simultaneously, I get the kernel
>> panic shown below (mainline is fine, without these patches).
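A minimal sketch of that kind of parallel-writer load; the number of writers
and the file names are illustrative, with the dd parameters mirroring the
single-task test reported elsewhere in the thread:

  # start a few synchronous writers in parallel, then wait for them
  for i in 1 2 3 4; do
    dd if=/dev/zero of=/root/test$i.img bs=512 count=10000 oflag=dsync &
  done
  wait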
On 5/23/19 10:22 AM, Paolo Valente wrote:
>
>> On 23 May 2019, at 11:19, Paolo Valente
>> wrote:
>>
>>> On 23 May 2019, at 04:30, Srivatsa S. Bhat
>>> wrote:
>>>
[...]
>>> Also, I'm very happy
On 5/22/19 7:30 PM, Srivatsa S. Bhat wrote:
> On 5/22/19 3:54 AM, Paolo Valente wrote:
>>
>>
>>> On 22 May 2019, at 12:01, Srivatsa S. Bhat
>>> wrote:
>>>
>>> On 5/22/19 2:09 AM, Paolo Valente wrote:
>>>>
>>
On 5/22/19 3:54 AM, Paolo Valente wrote:
>
>
>> On 22 May 2019, at 12:01, Srivatsa S. Bhat
>> wrote:
>>
>> On 5/22/19 2:09 AM, Paolo Valente wrote:
>>>
>>> First, thank you very much for testing my patches, and, above all, for
On 5/22/19 2:12 AM, Paolo Valente wrote:
>
>> On 22 May 2019, at 11:02, Srivatsa S. Bhat
>> wrote:
>>
>>
>> Let's continue here on LKML itself.
>
> Just done :)
>
>> The only reason I created the
>> bugzilla entry
On 5/22/19 2:09 AM, Paolo Valente wrote:
>
> First, thank you very much for testing my patches, and, above all, for
> sharing those huge traces!
>
> According to your traces, the residual 20% lower throughput that you
> record is due to the fact that the BFQ injection mechanism takes a few
>
On 5/22/19 1:05 AM, Paolo Valente wrote:
>
>
>> On 22 May 2019, at 00:51, Srivatsa S. Bhat
>> wrote:
>>
>> [ Resending this mail with a dropbox link to the traces (instead
>> of a file attachment), since it didn't go through the last
[ Resending this mail with a dropbox link to the traces (instead
of a file attachment), since it didn't go through the last time. ]
On 5/21/19 10:38 AM, Paolo Valente wrote:
>
>> So, instead of only sending me a trace, could you please:
>> 1) apply this new patch on top of the one I attached in m
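For reference, one common way to capture a block-layer trace of such a run is
blktrace; the device name and output file below are assumptions, and the
thread may well have used a different tracing method:

  # trace the device while the dd workload runs and decode the events
  blktrace -d /dev/sda -o - | blkparse -i - > bfq-trace.txt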
On 5/20/19 11:23 PM, Paolo Valente wrote:
>
>
>> On 21 May 2019, at 00:45, Srivatsa S. Bhat
>> wrote:
>>
>> On 5/20/19 3:19 AM, Paolo Valente wrote:
>>>
>>>
>>>> On 18 May 2019, at 22:50, Srivatsa S.
On 5/20/19 3:19 AM, Paolo Valente wrote:
>
>
>> On 18 May 2019, at 22:50, Srivatsa S. Bhat
>> wrote:
>>
>> On 5/18/19 11:39 AM, Paolo Valente wrote:
>>> I've addressed these issues in my last batch of improvements for BFQ,
>>
With bfq, I get:
5120000 bytes (5.1 MB, 4.9 MiB) copied, 84.8216 s, 60.4 kB/s
Please let me know if any more info about my setup might be helpful.
Thank you!
Regards,
Srivatsa
VMware Photon OS
>
>> On 18 May 2019, at 00:16, Srivatsa S. Bhat
>> wrote:
>>
Hi,
One of my colleagues noticed up to a 10x-30x drop in I/O throughput
running the following command, with the CFQ I/O scheduler:
dd if=/dev/zero of=/root/test.img bs=512 count=10000 oflag=dsync
Throughput with CFQ: 60 KB/s
Throughput with noop or deadline: 1.5 MB/s - 2 MB/s
I spent some tim
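A sketch of how such a scheduler comparison is typically run; the device name
(sda) is an assumption, and the noop/deadline/cfq names apply to the legacy
(non-blk-mq) block layer that CFQ lives in:

  cat /sys/block/sda/queue/scheduler        # current scheduler shown in brackets
  echo deadline > /sys/block/sda/queue/scheduler
  dd if=/dev/zero of=/root/test.img bs=512 count=10000 oflag=dsync
  echo cfq > /sys/block/sda/queue/scheduler
  dd if=/dev/zero of=/root/test.img bs=512 count=10000 oflag=dsync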
On 2/6/18 2:24 AM, Greg KH wrote:
> On Mon, Feb 05, 2018 at 06:25:27PM -0800, Srivatsa S. Bhat wrote:
>> From: Srivatsa S. Bhat
>>
>> register_blkdev() and __register_chrdev_region() treat the major
>> number as an unsigned int. So print it the same way to avoid
>>
From: Srivatsa S. Bhat
CHRDEV_MAJOR_DYN_END and CHRDEV_MAJOR_DYN_EXT_END are valid major
numbers. So fix the loop iteration to include them in the search for
free major numbers.
While at it, also remove a redundant if condition ("cd->major != i"),
as it will never be true.
From: Srivatsa S. Bhat
register_blkdev() and __register_chrdev_region() treat the major
number as an unsigned int. So print it the same way to avoid
absurd error statements such as:
"... major requested (-1) is greater than the maximum (511) ..."
(and also fix off-by-one bugs in the er