Re: [PATCH V8 00/14] mmc: Add Command Queue support

2017-10-24 Thread Adrian Hunter
On 24/10/17 10:39, Ulf Hansson wrote:
> [...]
> 
>>> However, you have completely ignored mine, Linus's and Bartlomiej's
>>> comments that we want the blkmq port to be a separate patch (or patches)
>>> and then make the CMDQ patches on top. This worries me, because it
>>> seems like our messages don't reach you.
>>
>> Rubbish!  I gave a very good reason for keeping the CQE code in - it is
>> designed to work together with the blkmq code.  I also pointed out that
>> it is trivial to see the CQE code and that it is all '+' lines anyway.
> 
> You gave reasons, but none of us bought them.
> 
>>
>> But not one question in response!  Where is a single example of what is
>> difficult about the way it is?  Where are the questions?  Not even a
>> request for documentation!  How am I supposed to know what you do or
>> don't understand if you don't ask any questions?  There is no evidence
>> that you guys have read a single line!
> 
> I have, and I have also tested it, finding it not working, as reported.
> 
> However, I have also told you that I am having a *hard time* reviewing
> it, because it implements both blkmq and CMDQ in the same patches, so
> the code changes get complex.
> 
>>
>> So, what are your plans for the patches?  What don't you understand?
> 
> I have told you this several times, as have Linus and Bartlomiej.
> 
> If you can split it up such that the blkmq support comes first, then I can
> review/test and pick it up.

I have done the split, but of course the code is just the same.


Re: [PATCH V8 00/14] mmc: Add Command Queue support

2017-10-24 Thread Adrian Hunter
On 24/10/17 08:37, Ulf Hansson wrote:
> + Bartlomiej
> 
> [...]
> 
>> So my conclusion is, let's start, as you suggested, by not completing
>> the request in ->done(), so as to maintain existing behavior. Then we can
>> address optimizations on top, which very likely will involve doing
>> changes to host drivers as well.
>
> Have you tested the latest version now?
>

 Ping?
>>>
>>> Still ping?
>>
>> How is your silence in any way an acceptable way to execute your
>> responsibilities as maintainer?
> 
> Seriously? You posted the new version Oct 13.

When you constantly go silent, it looks like you are deliberately delaying
the patches - which you have a long history of doing.  Are you deliberately
delaying again?

What are your plans for the patches?

Why do you think mmc work is so unimportant it can be left to whenever you
deign to get around to it?

> 
> I had to make some last-minute travel decisions, so unfortunately I
> won't be able to test this on HW from this Friday.
> 
> However, you have completely ignored mine, Linus's and Bartlomiej's
> comments that we want the blkmq port to be a separate patch (or patches)
> and then make the CMDQ patches on top. This worries me, because it
> seems like our messages don't reach you.

Rubbish!  I gave a very good reason for keeping the CQE code in - it is
designed to work together with the blkmq code.  I also pointed out that it
is trivial to see the CQE code and that it is all '+' lines anyway.

But not one question in response!  Where is a single example of what is
difficult about the way it is?  Where are the questions?  Not even a request
for documentation!  How am I supposed to know what you do or don't
understand if you don't ask any questions?  There is no evidence that you
guys have read a single line!

So, what are your plans for the patches?  What don't you understand?


Re: [PATCH V8 00/14] mmc: Add Command Queue support

2017-10-23 Thread Ulf Hansson
+ Bartlomiej

[...]

> So my conclusion is, let's start, as you suggested, by not completing
> the request in ->done(), so as to maintain existing behavior. Then we can
> address optimizations on top, which very likely will involve doing
> changes to host drivers as well.

 Have you tested the latest version now?

>>>
>>> Ping?
>>
>> Still ping?
>
> How is your silence in any way an acceptable way to execute your
> responsibilities as maintainer?

Seriously? You posted the new version Oct 13.

I had to make some last-minute travel decisions, so unfortunately I
won't be able to test this on HW from this Friday.

However, you have completely ignored mine, Linus's and Bartlomiej's
comments that we want the blkmq port to be a separate patch (or patches)
and then make the CMDQ patches on top. This worries me, because it
seems like our messages don't reach you.

Kind regards
Uffe


Re: [PATCH V8 00/14] mmc: Add Command Queue support

2017-10-23 Thread Adrian Hunter
On 20/10/17 15:30, Adrian Hunter wrote:
> On 19/10/17 14:44, Adrian Hunter wrote:
>> On 18/10/17 09:16, Adrian Hunter wrote:
>>> On 11/10/17 16:58, Ulf Hansson wrote:
 On 11 October 2017 at 14:58, Adrian Hunter  wrote:
> On 11/10/17 15:13, Ulf Hansson wrote:
>> On 10 October 2017 at 15:31, Adrian Hunter  wrote:
>>> On 10/10/17 16:08, Ulf Hansson wrote:
 [...]


 I have also run some tests on my ux500 board, enabling the blkmq
 path via the new MMC Kconfig option. My idea was to run some iozone
 comparisons between the legacy path and the new blkmq path, but I just
 couldn't get to that point because of the following errors.

 I am using a Kingston 4GB SDHC card, which is detected and mounted
 nicely. However, when I decide to do some writes to the card I get the
 following errors.

 root@ME:/mnt/sdcard dd if=/dev/zero of=testfile bs=8192 count=5000 conv=fsync
 [  463.714294] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  464.722656] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  466.081481] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  467.111236] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  468.669647] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  469.685699] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  471.043334] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  472.052337] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  473.342651] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  474.323760] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  475.544769] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  476.539031] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  477.748474] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  478.724182] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!

 I haven't yet gotten to the point of investigating this any further, and
 unfortunately I have a busy schedule with traveling next week. I will do
 my best to look into this as soon as I can.

 Perhaps you have some ideas?
>>>
>>> The behaviour depends on whether you have MMC_CAP_WAIT_WHILE_BUSY. Try
>>> changing that and see if it makes a difference.
>>
>> Yes, it does! I disabled MMC_CAP_WAIT_WHILE_BUSY (and its
>> corresponding code in mmci.c) and the errors go away.
>>
>> When I use MMC_CAP_WAIT_WHILE_BUSY I get these problems:
>>
>> [  223.820983] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  224.815795] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  226.034881] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  227.112884] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  227.220275] mmc0: Card stuck in wrong state! mmcblk0 mmc_blk_card_stuck
>> [  228.686798] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  229.892150] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  231.031890] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  232.239013] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> 5000+0 records in
>> 5000+0 records out
>> root@ME:/mnt/sdcard
>>
>> I looked at the new blkmq code from patch v10 13/15. It seems like the
>> MMC_CAP_WAIT_WHILE_BUSY is used to determine whether the async request
>> mechanism should be used or not. Perhaps I didn't look closely enough,
>> but maybe you could elaborate on why this seems to be the case!?
>
> MMC_CAP_WAIT_WHILE_BUSY is necessary because it means that a data transfer
> request has finished when the host controller calls mmc_request_done(),
> i.e. polling the card is not necessary.

 Well, that is a rather big change on its own. Earlier we polled with
 CMD13 to verify that the card has moved back to the transfer state, in
 case it was a write. And that was regardless of whether
 MMC_CAP_WAIT_WHILE_BUSY was set or not. Right!?

Re: [PATCH V8 00/14] mmc: Add Command Queue support

2017-10-19 Thread Adrian Hunter
On 18/10/17 09:16, Adrian Hunter wrote:
> On 11/10/17 16:58, Ulf Hansson wrote:
>> On 11 October 2017 at 14:58, Adrian Hunter  wrote:
>>> On 11/10/17 15:13, Ulf Hansson wrote:
 On 10 October 2017 at 15:31, Adrian Hunter  wrote:
> On 10/10/17 16:08, Ulf Hansson wrote:
>> [...]
>>
>>
>> I have also run some tests on my ux500 board, enabling the blkmq
>> path via the new MMC Kconfig option. My idea was to run some iozone
>> comparisons between the legacy path and the new blkmq path, but I just
>> couldn't get to that point because of the following errors.
>>
>> I am using a Kingston 4GB SDHC card, which is detected and mounted
>> nicely. However, when I decide to do some writes to the card I get the
>> following errors.
>>
>> root@ME:/mnt/sdcard dd if=/dev/zero of=testfile bs=8192 count=5000 conv=fsync
>> [  463.714294] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  464.722656] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  466.081481] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  467.111236] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  468.669647] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  469.685699] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  471.043334] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  472.052337] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  473.342651] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  474.323760] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  475.544769] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  476.539031] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  477.748474] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  478.724182] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>
>> I haven't yet gotten to the point of investigating this any further, and
>> unfortunately I have a busy schedule with traveling next week. I will do
>> my best to look into this as soon as I can.
>>
>> Perhaps you have some ideas?
>
> The behaviour depends on whether you have MMC_CAP_WAIT_WHILE_BUSY. Try
> changing that and see if it makes a difference.

 Yes, it does! I disabled MMC_CAP_WAIT_WHILE_BUSY (and its
 corresponding code in mmci.c) and the errors go away.

 When I use MMC_CAP_WAIT_WHILE_BUSY I get these problems:

 [  223.820983] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  224.815795] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  226.034881] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  227.112884] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  227.220275] mmc0: Card stuck in wrong state! mmcblk0 mmc_blk_card_stuck
 [  228.686798] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  229.892150] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  231.031890] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  232.239013] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 5000+0 records in
 5000+0 records out
 root@ME:/mnt/sdcard

 I looked at the new blkmq code from patch v10 13/15. It seems like the
 MMC_CAP_WAIT_WHILE_BUSY is used to determine whether the async request
 mechanism should be used or not. Perhaps I didn't look closely enough,
 but maybe you could elaborate on why this seems to be the case!?
>>>
>>> MMC_CAP_WAIT_WHILE_BUSY is necessary because it means that a data
>>> transfer request has finished when the host controller calls
>>> mmc_request_done(), i.e. polling the card is not necessary.
>>
>> Well, that is a rather big change on its own. Earlier we polled with
>> CMD13 to verify that the card has moved back to the transfer state, in
>> case it was a write. And that was regardless of whether
>> MMC_CAP_WAIT_WHILE_BUSY was set or not. Right!?
>
> Yes
>
>>
>> I am not sure it's a good idea to bypass that validation; it seems
>> fragile to rely only on the busy detection on the DAT line for writes.
>
> Can you cite something from the specifications that backs that up,
> because I couldn't find anything to suggest that CMD13 polling was
> expected.

Re: [PATCH V8 00/14] mmc: Add Command Queue support

2017-10-18 Thread Adrian Hunter
On 11/10/17 16:58, Ulf Hansson wrote:
> On 11 October 2017 at 14:58, Adrian Hunter  wrote:
>> On 11/10/17 15:13, Ulf Hansson wrote:
>>> On 10 October 2017 at 15:31, Adrian Hunter  wrote:
 On 10/10/17 16:08, Ulf Hansson wrote:
> [...]
>
>
> I have also run some tests on my ux500 board, enabling the blkmq
> path via the new MMC Kconfig option. My idea was to run some iozone
> comparisons between the legacy path and the new blkmq path, but I just
> couldn't get to that point because of the following errors.
>
> I am using a Kingston 4GB SDHC card, which is detected and mounted
> nicely. However, when I decide to do some writes to the card I get the
> following errors.
>
> root@ME:/mnt/sdcard dd if=/dev/zero of=testfile bs=8192 count=5000 conv=fsync
> [  463.714294] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  464.722656] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  466.081481] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  467.111236] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  468.669647] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  469.685699] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  471.043334] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  472.052337] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  473.342651] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  474.323760] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  475.544769] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  476.539031] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  477.748474] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  478.724182] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>
> I haven't yet gotten to the point of investigating this any further, and
> unfortunately I have a busy schedule with traveling next week. I will do
> my best to look into this as soon as I can.
>
> Perhaps you have some ideas?

 The behaviour depends on whether you have MMC_CAP_WAIT_WHILE_BUSY. Try
 changing that and see if it makes a difference.
>>>
>>> Yes, it does! I disabled MMC_CAP_WAIT_WHILE_BUSY (and its
>>> corresponding code in mmci.c) and the errors go away.
>>>
>>> When I use MMC_CAP_WAIT_WHILE_BUSY I get these problems:
>>>
>>> [  223.820983] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> [  224.815795] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> [  226.034881] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> [  227.112884] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> [  227.220275] mmc0: Card stuck in wrong state! mmcblk0 mmc_blk_card_stuck
>>> [  228.686798] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> [  229.892150] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> [  231.031890] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> [  232.239013] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> 5000+0 records in
>>> 5000+0 records out
>>> root@ME:/mnt/sdcard
>>>
>>> I looked at the new blkmq code from patch v10 13/15. It seems like the
>>> MMC_CAP_WAIT_WHILE_BUSY is used to determine whether the async request
>>> mechanism should be used or not. Perhaps I didn't look closely enough,
>>> but maybe you could elaborate on why this seems to be the case!?
>>
>> MMC_CAP_WAIT_WHILE_BUSY is necessary because it means that a data
>> transfer request has finished when the host controller calls
>> mmc_request_done(), i.e. polling the card is not necessary.
>
> Well, that is a rather big change on its own. Earlier we polled with
> CMD13 to verify that the card has moved back to the transfer state, in
> case it was a write. And that was regardless of whether
> MMC_CAP_WAIT_WHILE_BUSY was set or not. Right!?

 Yes

>
> I am not sure it's a good idea to bypass that validation; it seems
> fragile to rely only on the busy detection on the DAT line for writes.

 Can you cite something from the specifications that backs that up,
 because I couldn't find anything to suggest that CMD13 polling was
 expected.
>>>
>>> No I can't, but I don't see why that matters.
>>>
>>> My point is, if we want to go down that road by avoiding the CMD13
>>> polling, that needs to be a separate change, which we can test and
>>> confirm on its own.

Re: [PATCH V8 00/14] mmc: Add Command Queue support

2017-10-13 Thread Adrian Hunter
On 11/10/17 16:58, Ulf Hansson wrote:
> On 11 October 2017 at 14:58, Adrian Hunter  wrote:
>> On 11/10/17 15:13, Ulf Hansson wrote:
>>> On 10 October 2017 at 15:31, Adrian Hunter  wrote:
 On 10/10/17 16:08, Ulf Hansson wrote:
> [...]
>
>
> I have also run some tests on my ux500 board, enabling the blkmq
> path via the new MMC Kconfig option. My idea was to run some iozone
> comparisons between the legacy path and the new blkmq path, but I just
> couldn't get to that point because of the following errors.
>
> I am using a Kingston 4GB SDHC card, which is detected and mounted
> nicely. However, when I decide to do some writes to the card I get the
> following errors.
>
> root@ME:/mnt/sdcard dd if=/dev/zero of=testfile bs=8192 count=5000 conv=fsync
> [  463.714294] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  464.722656] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  466.081481] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  467.111236] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  468.669647] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  469.685699] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  471.043334] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  472.052337] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  473.342651] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  474.323760] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  475.544769] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  476.539031] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  477.748474] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  478.724182] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>
> I haven't yet gotten to the point of investigating this any further, and
> unfortunately I have a busy schedule with traveling next week. I will do
> my best to look into this as soon as I can.
>
> Perhaps you have some ideas?

 The behaviour depends on whether you have MMC_CAP_WAIT_WHILE_BUSY. Try
 changing that and see if it makes a difference.
>>>
>>> Yes, it does! I disabled MMC_CAP_WAIT_WHILE_BUSY (and its
>>> corresponding code in mmci.c) and the errors go away.
>>>
>>> When I use MMC_CAP_WAIT_WHILE_BUSY I get these problems:
>>>
>>> [  223.820983] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> [  224.815795] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> [  226.034881] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> [  227.112884] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> [  227.220275] mmc0: Card stuck in wrong state! mmcblk0 mmc_blk_card_stuck
>>> [  228.686798] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> [  229.892150] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> [  231.031890] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> [  232.239013] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> 5000+0 records in
>>> 5000+0 records out
>>> root@ME:/mnt/sdcard
>>>
>>> I looked at the new blkmq code from patch v10 13/15. It seems like the
>>> MMC_CAP_WAIT_WHILE_BUSY is used to determine whether the async request
>>> mechanism should be used or not. Perhaps I didn't look closely enough,
>>> but maybe you could elaborate on why this seems to be the case!?
>>
>> MMC_CAP_WAIT_WHILE_BUSY is necessary because it means that a data
>> transfer request has finished when the host controller calls
>> mmc_request_done(), i.e. polling the card is not necessary.
>
> Well, that is a rather big change on its own. Earlier we polled with
> CMD13 to verify that the card has moved back to the transfer state, in
> case it was a write. And that was regardless of whether
> MMC_CAP_WAIT_WHILE_BUSY was set or not. Right!?

 Yes

>
> I am not sure it's a good idea to bypass that validation; it seems
> fragile to rely only on the busy detection on the DAT line for writes.

 Can you cite something from the specifications that backs that up,
 because I couldn't find anything to suggest that CMD13 polling was
 expected.
>>>
>>> No I can't, but I don't see why that matters.
>>>
>>> My point is, if we want to go down that road by avoiding the CMD13
>>> polling, that needs to be a separate change, which we can test and
>>> confirm on its own.

Re: [PATCH V8 00/14] mmc: Add Command Queue support

2017-10-12 Thread Ulf Hansson
On 12 October 2017 at 10:08, Linus Walleij  wrote:
> On Wed, Oct 11, 2017 at 3:58 PM, Ulf Hansson  wrote:
>
>> Actually completing the request in the ->done() callback may still be
>> possible, because in principle it only needs to inform the other
>> prepared request that it may start, before it continues to
>> post-process/complete the current one.
>
> I try to do that in this patch:
> https://marc.info/?l=linux-mmc&m=148665460925587&w=2

Yeah, something like that!

>
> I'm trying to rebase this work now.

Great!

>
>> However, looking at, for example, how mmci.c works, it actually holds
>> its spinlock while it calls mmc_request_done(). The same spinlock is
>> taken in the ->request() function, but not in the ->post_req()
>> function. In other words, completing the request in the ->done()
>> callback would make mmci keep the spinlock held throughout the
>> post-processing cycle, which then prevents the next request from being
>> started.
>
> It seems my patch would not deliver in some drivers until we look
> into the locking semantics in the drivers.

Yes!

As a matter of fact, the above change may actually introduce
performance regressions, in case the behavior of the driver doesn't
follow the expectations of the core. For that reason, either we have
to invent a new MMC cap so drivers can opt in to the "shortcut", or
fix the host drivers first.

Kind regards
Uffe


Re: [PATCH V8 00/14] mmc: Add Command Queue support

2017-10-12 Thread Linus Walleij
On Wed, Oct 11, 2017 at 3:58 PM, Ulf Hansson  wrote:

> Actually completing the request in the ->done() callback may still be
> possible, because in principle it only needs to inform the other
> prepared request that it may start, before it continues to
> post-process/complete the current one.

I try to do that in this patch:
https://marc.info/?l=linux-mmc&m=148665460925587&w=2

I'm trying to rebase this work now.

> However, looking at, for example, how mmci.c works, it actually holds
> its spinlock while it calls mmc_request_done(). The same spinlock is
> taken in the ->request() function, but not in the ->post_req()
> function. In other words, completing the request in the ->done()
> callback would make mmci keep the spinlock held throughout the
> post-processing cycle, which then prevents the next request from being
> started.

It seems my patch would not deliver in some drivers until we look
into the locking semantics in the drivers.
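
(For reference, the locking pattern at issue looks roughly like this in an
mmci-style driver; a simplified sketch, not the actual mmci.c code:)

	static irqreturn_t my_irq(int irq, void *dev_id)
	{
		struct my_host *host = dev_id;	/* illustrative host type */
		unsigned long flags;

		spin_lock_irqsave(&host->lock, flags);
		/* ... read controller status, finish command/data handling ... */
		mmc_request_done(host->mmc, host->mrq);	/* called under host->lock */
		spin_unlock_irqrestore(&host->lock, flags);

		return IRQ_HANDLED;
	}

(Since ->request() takes the same lock, completing the request and starting
the next one synchronously from within mmc_request_done() would run with the
lock held throughout post-processing.)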

Yours,
Linus Walleij


Re: [PATCH V8 00/14] mmc: Add Command Queue support

2017-10-11 Thread Ulf Hansson
On 11 October 2017 at 14:58, Adrian Hunter  wrote:
> On 11/10/17 15:13, Ulf Hansson wrote:
>> On 10 October 2017 at 15:31, Adrian Hunter  wrote:
>>> On 10/10/17 16:08, Ulf Hansson wrote:
 [...]


 I have also run some tests on my ux500 board, enabling the blkmq
 path via the new MMC Kconfig option. My idea was to run some iozone
 comparisons between the legacy path and the new blkmq path, but I just
 couldn't get to that point because of the following errors.

 I am using a Kingston 4GB SDHC card, which is detected and mounted
 nicely. However, when I decide to do some writes to the card I get the
 following errors.

 root@ME:/mnt/sdcard dd if=/dev/zero of=testfile bs=8192 count=5000 conv=fsync
 [  463.714294] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  464.722656] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  466.081481] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  467.111236] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  468.669647] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  469.685699] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  471.043334] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  472.052337] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  473.342651] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  474.323760] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  475.544769] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  476.539031] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  477.748474] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  478.724182] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!

 I haven't yet gotten to the point of investigating this any further, and
 unfortunately I have a busy schedule with traveling next week. I will do
 my best to look into this as soon as I can.

 Perhaps you have some ideas?
>>>
>>> The behaviour depends on whether you have MMC_CAP_WAIT_WHILE_BUSY. Try
>>> changing that and see if it makes a difference.
>>
>> Yes, it does! I disabled MMC_CAP_WAIT_WHILE_BUSY (and its
>> corresponding code in mmci.c) and the errors go away.
>>
>> When I use MMC_CAP_WAIT_WHILE_BUSY I get these problems:
>>
>> [  223.820983] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  224.815795] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  226.034881] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  227.112884] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  227.220275] mmc0: Card stuck in wrong state! mmcblk0 mmc_blk_card_stuck
>> [  228.686798] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  229.892150] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  231.031890] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  232.239013] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> 5000+0 records in
>> 5000+0 records out
>> root@ME:/mnt/sdcard
>>
>> I looked at the new blkmq code from patch v10 13/15. It seems like the
>> MMC_CAP_WAIT_WHILE_BUSY is used to determine whether the async request
>> mechanism should be used or not. Perhaps I didn't look closely enough,
>> but maybe you could elaborate on why this seems to be the case!?
>
> MMC_CAP_WAIT_WHILE_BUSY is necessary because it means that a data transfer
> request has finished when the host controller calls mmc_request_done(),
> i.e. polling the card is not necessary.

 Well, that is a rather big change on its own. Earlier we polled with
 CMD13 to verify that the card has moved back to the transfer state, in
 case it was a write. And that was regardless of whether
 MMC_CAP_WAIT_WHILE_BUSY was set or not. Right!?
>>>
>>> Yes
>>>

 I am not sure it's a good idea to bypass that validation; it seems
 fragile to rely only on the busy detection on the DAT line for writes.
>>>
>>> Can you cite something from the specifications that backs that up, because I
>>> couldn't find anything to suggest that CMD13 polling was expected.
>>
>> No I can't, but I don't see why that matters.
>>
>> My point is, if we want to go down that road by avoiding the CMD13
>> polling, that needs to be a separate change, which we can test and
>> confirm on its own.
>>
>>>

>
> Have you tried V9 or V10?  There was a fix in V9 related to calling
> ->post_req() which could mess up DMA.

Re: [PATCH V8 00/14] mmc: Add Command Queue support

2017-10-11 Thread Adrian Hunter
On 11/10/17 15:13, Ulf Hansson wrote:
> On 10 October 2017 at 15:31, Adrian Hunter  wrote:
>> On 10/10/17 16:08, Ulf Hansson wrote:
>>> [...]
>>>
>>>
>>> I have also run some tests on my ux500 board, enabling the blkmq
>>> path via the new MMC Kconfig option. My idea was to run some iozone
>>> comparisons between the legacy path and the new blkmq path, but I just
>>> couldn't get to that point because of the following errors.
>>>
>>> I am using a Kingston 4GB SDHC card, which is detected and mounted
>>> nicely. However, when I decide to do some writes to the card I get the
>>> following errors.
>>>
>>> root@ME:/mnt/sdcard dd if=/dev/zero of=testfile bs=8192 count=5000 conv=fsync
>>> [  463.714294] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> [  464.722656] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> [  466.081481] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> [  467.111236] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> [  468.669647] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> [  469.685699] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> [  471.043334] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> [  472.052337] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> [  473.342651] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> [  474.323760] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> [  475.544769] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> [  476.539031] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> [  477.748474] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> [  478.724182] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>>
>>> I haven't yet gotten to the point of investigating this any further, and
>>> unfortunately I have a busy schedule with traveling next week. I will do
>>> my best to look into this as soon as I can.
>>>
>>> Perhaps you have some ideas?
>>
>> The behaviour depends on whether you have MMC_CAP_WAIT_WHILE_BUSY. Try
>> changing that and see if it makes a difference.
>
> Yes, it does! I disabled MMC_CAP_WAIT_WHILE_BUSY (and its
> corresponding code in mmci.c) and the errors go away.
>
> When I use MMC_CAP_WAIT_WHILE_BUSY I get these problems:
>
> [  223.820983] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  224.815795] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  226.034881] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  227.112884] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  227.220275] mmc0: Card stuck in wrong state! mmcblk0 mmc_blk_card_stuck
> [  228.686798] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  229.892150] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  231.031890] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  232.239013] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> 5000+0 records in
> 5000+0 records out
> root@ME:/mnt/sdcard
>
> I looked at the new blkmq code from patch v10 13/15. It seems like the
> MMC_CAP_WAIT_WHILE_BUSY is used to determine whether the async request
> mechanism should be used or not. Perhaps I didn't look closely enough,
> but maybe you could elaborate on why this seems to be the case!?

 MMC_CAP_WAIT_WHILE_BUSY is necessary because it means that a data transfer
 request has finished when the host controller calls mmc_request_done(),
 i.e. polling the card is not necessary.
>>>
>>> Well, that is a rather big change on its own. Earlier we polled with
>>> CMD13 to verify that the card has moved back to the transfer state, in
>>> case it was a write. And that was regardless of whether
>>> MMC_CAP_WAIT_WHILE_BUSY was set or not. Right!?
>>
>> Yes
>>
>>>
>>> I am not sure it's a good idea to bypass that validation; it seems
>>> fragile to rely only on the busy detection on the DAT line for writes.
>>
>> Can you cite something from the specifications that backs that up, because I
>> couldn't find anything to suggest that CMD13 polling was expected.
> 
> No I can't, but I don't see why that matters.
> 
> My point is, if we want to go down that road by avoiding the CMD13
> polling, that needs to be a separate change, which we can test and
> confirm on its own.
> 
>>
>>>

 Have you tried V9 or V10?  There was a fix in V9 related to calling
 ->post_req() which could mess up DMA.
>>>
>>> I have used V10.
>>>

 The other thing that could go wrong with DMA is if it cannot accept
 ->post_req() being called from mmc_request_done().
>>>
>>> I don't think mmci has a problem with that; however, why do you want to
>>> do this? Wouldn't that defeat some of the benefits of the async
>>> request mechanism?

Re: [PATCH V8 00/14] mmc: Add Command Queue support

2017-10-11 Thread Ulf Hansson
On 10 October 2017 at 15:31, Adrian Hunter  wrote:
> On 10/10/17 16:08, Ulf Hansson wrote:
>> [...]
>>
>>
>> I have also run some tests on my ux500 board, enabling the blkmq
>> path via the new MMC Kconfig option. My idea was to run some iozone
>> comparisons between the legacy path and the new blkmq path, but I just
>> couldn't get to that point because of the following errors.
>>
>> I am using a Kingston 4GB SDHC card, which is detected and mounted
>> nicely. However, when I decide to do some writes to the card I get the
>> following errors.
>>
>> root@ME:/mnt/sdcard dd if=/dev/zero of=testfile bs=8192 count=5000 conv=fsync
>> [  463.714294] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  464.722656] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  466.081481] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  467.111236] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  468.669647] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  469.685699] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  471.043334] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  472.052337] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  473.342651] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  474.323760] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  475.544769] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  476.539031] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  477.748474] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  478.724182] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>
>> I haven't yet gotten to the point of investigating this any further, and
>> unfortunately I have a busy schedule with traveling next week. I will do
>> my best to look into this as soon as I can.
>>
>> Perhaps you have some ideas?
>
> The behaviour depends on whether you have MMC_CAP_WAIT_WHILE_BUSY. Try
> changing that and see if it makes a difference.

 Yes, it does! I disabled MMC_CAP_WAIT_WHILE_BUSY (and its
 corresponding code in mmci.c) and the errors go away.

 When I use MMC_CAP_WAIT_WHILE_BUSY I get these problems:

 [  223.820983] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  224.815795] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  226.034881] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  227.112884] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  227.220275] mmc0: Card stuck in wrong state! mmcblk0 mmc_blk_card_stuck
 [  228.686798] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  229.892150] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  231.031890] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  232.239013] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 5000+0 records in
 5000+0 records out
 root@ME:/mnt/sdcard

 I looked at the new blkmq code from patch v10 13/15. It seems like the
 MMC_CAP_WAIT_WHILE_BUSY is used to determine whether the async request
 mechanism should be used or not. Perhaps I didn't look closely enough,
 but maybe you could elaborate on why this seems to be the case!?
>>>
>>> MMC_CAP_WAIT_WHILE_BUSY is necessary because it means that a data transfer
>>> request has finished when the host controller calls mmc_request_done(),
>>> i.e. polling the card is not necessary.
>>
>> Well, that is a rather big change on its own. Earlier we polled with
>> CMD13 to verify that the card has moved back to the transfer state, in
>> case it was a write. And that was regardless of whether
>> MMC_CAP_WAIT_WHILE_BUSY was set or not. Right!?
>
> Yes
>
>>
>> I am not sure it's a good idea to bypass that validation; it seems
>> fragile to rely only on the busy detection on the DAT line for writes.
>
> Can you cite something from the specifications that backs that up, because I
> couldn't find anything to suggest that CMD13 polling was expected.

No I can't, but I don't see why that matters.

My point is, if we want to go down that road by avoiding the CMD13
polling, that needs to be a separate change, which we can test and
confirm on its own.

>
>>
>>>
>>> Have you tried V9 or V10?  There was a fix in V9 related to calling
>>> ->post_req() which could mess up DMA.
>>
>> I have used V10.
>>
>>>
>>> The other thing that could go wrong with DMA is if it cannot accept
>>> ->post_req() being called from mmc_request_done().
>>
>> I don't think mmci has a problem with that; however, why do you want to
>> do this? Wouldn't that defeat some of the benefits of the async
>> request mechanism?
>
> Perhaps - but it would need to be tested.  If there are more requests
> waiting, one optimization could be to defer ->post_req() until after the
> next request is started.

Re: [PATCH V8 00/14] mmc: Add Command Queue support

2017-10-10 Thread Adrian Hunter
On 10/10/17 16:08, Ulf Hansson wrote:
> [...]
> 
>
> I have also run some tests on my ux500 board, enabling the blkmq
> path via the new MMC Kconfig option. My idea was to run some iozone
> comparisons between the legacy path and the new blkmq path, but I just
> couldn't get to that point because of the following errors.
>
> I am using a Kingston 4GB SDHC card, which is detected and mounted
> nicely. However, when I decide to do some writes to the card I get the
> following errors.
>
> root@ME:/mnt/sdcard dd if=/dev/zero of=testfile bs=8192 count=5000 conv=fsync
> [  463.714294] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  464.722656] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  466.081481] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  467.111236] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  468.669647] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  469.685699] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  471.043334] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  472.052337] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  473.342651] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  474.323760] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  475.544769] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  476.539031] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  477.748474] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  478.724182] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>
> I haven't yet gotten to the point of investigating this any further, and
> unfortunately I have a busy schedule with traveling next week. I will do
> my best to look into this as soon as I can.
>
> Perhaps you have some ideas?

 The behaviour depends on whether you have MMC_CAP_WAIT_WHILE_BUSY. Try
 changing that and see if it makes a difference.
>>>
>>> Yes, it does! I disabled MMC_CAP_WAIT_WHILE_BUSY (and its
>>> corresponding code in mmci.c) and the errors go away.
>>>
>>> When I use MMC_CAP_WAIT_WHILE_BUSY I get these problems:
>>>
>>> [  223.820983] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> [  224.815795] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> [  226.034881] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> [  227.112884] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> [  227.220275] mmc0: Card stuck in wrong state! mmcblk0 mmc_blk_card_stuck
>>> [  228.686798] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> [  229.892150] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> [  231.031890] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> [  232.239013] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> 5000+0 records in
>>> 5000+0 records out
>>> root@ME:/mnt/sdcard
>>>
>>> I looked at the new blkmq code from patch v10 13/15. It seems like the
>>> MMC_CAP_WAIT_WHILE_BUSY is used to determine whether the async request
>>> mechanism should be used or not. Perhaps I didn't look closely enough,
>>> but maybe you could elaborate on why this seems to be the case!?
>>
>> MMC_CAP_WAIT_WHILE_BUSY is necessary because it means that a data transfer
>> request has finished when the host controller calls mmc_request_done(),
>> i.e. polling the card is not necessary.
> 
> Well, that is a rather big change on its own. Earlier we polled with
> CMD13 to verify that the card has moved back to the transfer state, in
> case it was a write. And that was regardless of whether
> MMC_CAP_WAIT_WHILE_BUSY was set or not. Right!?

Yes

> 
> I am not sure it's a good idea to bypass that validation; it seems
> fragile to rely only on the busy detection on the DAT line for writes.

Can you cite something from the specifications that backs that up, because I
couldn't find anything to suggest that CMD13 polling was expected.

> 
>>
>> Have you tried V9 or V10?  There was a fix in V9 related to calling
>> ->post_req() which could mess up DMA.
> 
> I have used V10.
> 
>>
>> The other thing that could go wrong with DMA is if it cannot accept
>> ->post_req() being called from mmc_request_done().
> 
> I don't think mmci has a problem with that; however, why do you want to
> do this? Wouldn't that defeat some of the benefits of the async
> request mechanism?

Perhaps - but it would need to be tested.  If there are more requests
waiting, one optimization could be to defer ->post_req() until after the
next request is started.
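
(Roughly the following, with invented names, calling the host ops directly
only for illustration; real code would go through the core's issuing path:)

	/* Sketch: start the next prepared request before cleaning up the
	 * previous one, so ->post_req() DMA unmapping overlaps the new
	 * transfer instead of delaying it. */
	static void my_issue_next_then_post(struct mmc_host *host,
					    struct mmc_request *prev,
					    struct mmc_request *next)
	{
		if (next)
			host->ops->request(host, next);
		if (prev && host->ops->post_req)
			host->ops->post_req(host, prev, 0);
	}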


Re: [PATCH V8 00/14] mmc: Add Command Queue support

2017-10-10 Thread Ulf Hansson
[...]


 I have also run some tests on my ux500 board, enabling the blkmq
 path via the new MMC Kconfig option. My idea was to run some iozone
 comparisons between the legacy path and the new blkmq path, but I just
 couldn't get to that point because of the following errors.

 I am using a Kingston 4GB SDHC card, which is detected and mounted
 nicely. However, when I decide to do some writes to the card I get the
 following errors.

 root@ME:/mnt/sdcard dd if=/dev/zero of=testfile bs=8192 count=5000 conv=fsync
 [  463.714294] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  464.722656] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  466.081481] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  467.111236] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  468.669647] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  469.685699] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  471.043334] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  472.052337] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  473.342651] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  474.323760] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  475.544769] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  476.539031] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  477.748474] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
 [  478.724182] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!

 I haven't yet gotten to the point of investigating this any further, and
 unfortunately I have a busy schedule with traveling next week. I will do
 my best to look into this as soon as I can.

 Perhaps you have some ideas?
>>>
>>> The behaviour depends on whether you have MMC_CAP_WAIT_WHILE_BUSY. Try
>>> changing that and see if it makes a difference.
>>
>> Yes, it does! I disabled MMC_CAP_WAIT_WHILE_BUSY (and its
>> corresponding code in mmci.c) and the errors go away.
>>
>> When I use MMC_CAP_WAIT_WHILE_BUSY I get these problems:
>>
>> [  223.820983] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  224.815795] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  226.034881] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  227.112884] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  227.220275] mmc0: Card stuck in wrong state! mmcblk0 mmc_blk_card_stuck
>> [  228.686798] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  229.892150] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  231.031890] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  232.239013] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> 5000+0 records in
>> 5000+0 records out
>> root@ME:/mnt/sdcard
>>
>> I looked at the new blkmq code from patch v10 13/15. It seems like the
>> MMC_CAP_WAIT_WHILE_BUSY is used to determine whether the async request
>> mechanism should be used or not. Perhaps I didn't look closely enough,
>> but maybe you could elaborate on why this seems to be the case!?
>
> MMC_CAP_WAIT_WHILE_BUSY is necessary because it means that a data transfer
> request has finished when the host controller calls mmc_request_done(),
> i.e. polling the card is not necessary.

Well, that is a rather big change on its own. Earlier we polled with
CMD13 to verify that the card has moved back to the transfer state, in
case it was a write. And that was regardless of whether
MMC_CAP_WAIT_WHILE_BUSY was set or not. Right!?

I am not sure it's a good idea to bypass that validation; it seems
fragile to rely only on the busy detection on the DAT line for writes.
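
(For context, the CMD13 polling in question is roughly the following; a
simplified sketch of the core's status polling, not the exact mmc_blk code:)

	/* Poll CMD13 (SEND_STATUS) until the card has left the programming
	 * state and is ready for data again. */
	static int my_poll_for_tran_state(struct mmc_card *card)
	{
		u32 status;
		int err;

		do {
			err = mmc_send_status(card, &status);	/* CMD13 */
			if (err)
				return err;
			/* A real implementation bounds this loop with a timeout. */
		} while (R1_CURRENT_STATE(status) == R1_STATE_PRG ||
			 !(status & R1_READY_FOR_DATA));

		return 0;
	}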

>
> Have you tried V9 or V10?  There was a fix in V9 related to calling
> ->post_req() which could mess up DMA.

I have used V10.

>
> The other thing that could go wrong with DMA is if it cannot accept
> ->post_req() being called from mmc_request_done().

I don't think mmci has a problem with that; however, why do you want to
do this? Wouldn't that defeat some of the benefits of the async
request mechanism?

Kind regards
Uffe


Re: [PATCH V8 00/14] mmc: Add Command Queue support

2017-10-10 Thread Adrian Hunter
On 10/10/17 15:12, Ulf Hansson wrote:
> On 21 September 2017 at 11:44, Adrian Hunter  wrote:
>> On 21/09/17 12:01, Ulf Hansson wrote:
>>> On 13 September 2017 at 13:40, Adrian Hunter  wrote:
 Hi

 Here is V8 of the hardware command queue patches without the software
 command queue patches, now using blk-mq and now with blk-mq support for
 non-CQE I/O.

 After the unacceptable debacle of the last release cycle, I expect an
 immediate response to these patches.

 HW CMDQ offers 25% - 50% better random multi-threaded I/O.  I see a slight
 2% drop in sequential read speed but no change to sequential write.

 Non-CQE blk-mq showed a 3% decrease in sequential read performance.  This
 seemed to be coming from the inferior latency of running work items
 compared with a dedicated thread.  Hacking the blk-mq workqueue to be
 unbound reduced the performance degradation from 3% to 1%.

 While we should look at changing blk-mq to give better workqueue
 performance, a bigger gain is likely to be made by adding a new host API
 to enable the next already-prepared request to be issued directly from
 within the ->done() callback of the current request.
>>>
>>> Adrian, I am reviewing this series, however let me comment on each
>>> change individually.
>>>
>>> I have also run some tests on my ux500 board, enabling the blkmq
>>> path via the new MMC Kconfig option. My idea was to run some iozone
>>> comparisons between the legacy path and the new blkmq path, but I just
>>> couldn't get to that point because of the following errors.
>>>
>>> I am using a Kingston 4GB SDHC card, which is detected and mounted
>>> nicely. However, when I decide to do some writes to the card I get the
>>> following errors.
>>>
>>> root@ME:/mnt/sdcard dd if=/dev/zero of=testfile bs=8192 count=5000 conv=fsync
>>> [  463.714294] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> [  464.722656] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> [  466.081481] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> [  467.111236] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> [  468.669647] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> [  469.685699] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> [  471.043334] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> [  472.052337] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> [  473.342651] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> [  474.323760] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> [  475.544769] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> [  476.539031] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> [  477.748474] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>> [  478.724182] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>>
>>> I haven't yet gotten to the point of investigating this any further, and
>>> unfortunately I have a busy schedule with traveling next week. I will do
>>> my best to look into this as soon as I can.
>>>
>>> Perhaps you have some ideas?
>>
>> The behaviour depends on whether you have MMC_CAP_WAIT_WHILE_BUSY. Try
>> changing that and see if it makes a difference.
> 
> Yes, it does! I disabled MMC_CAP_WAIT_WHILE_BUSY (and its
> corresponding code in mmci.c) and the errors go away.
> 
> When I use MMC_CAP_WAIT_WHILE_BUSY I get these problems:
> 
> [  223.820983] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  224.815795] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  226.034881] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  227.112884] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  227.220275] mmc0: Card stuck in wrong state! mmcblk0 mmc_blk_card_stuck
> [  228.686798] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  229.892150] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  231.031890] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  232.239013] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> 5000+0 records in
> 5000+0 records out
> root@ME:/mnt/sdcard
> 
> I looked at the new blkmq code from patch v10 13/15. It seems like the
> MMC_CAP_WAIT_WHILE_BUSY is used to determine whether the async request
> mechanism should be used or not. Perhaps I didn't look closely enough,
> but maybe you could elaborate on why this seems to be the case!?

MMC_CAP_WAIT_WHILE_BUSY is necessary because it means that a data transfer
request has finished when the host controller calls mmc_request_done(),
i.e. polling the card is not necessary.
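
(In other words, roughly this decision; a simplified sketch with invented
helper names, not code from the series:)

	static void my_transfer_finished(struct mmc_host *host,
					 struct mmc_card *card)
	{
		if (host->caps & MMC_CAP_WAIT_WHILE_BUSY) {
			/* The controller waits out busy signalling on DAT0,
			 * so mmc_request_done() already means the write has
			 * completed in the card. */
			return;
		}
		/* Otherwise the card may still be programming, so it has to
		 * be polled, e.g. with CMD13 via mmc_send_status(). */
		my_poll_card_until_tran_state(card);	/* illustrative helper */
	}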

Have you tried V9 or V10?  There was a fix in V9 related to calling
->post_req() which could mess up DMA.

The other thing that could go wrong with DMA is if it cannot accept
->post_req() being called from mmc_request_done().

Re: [PATCH V8 00/14] mmc: Add Command Queue support

2017-10-10 Thread Ulf Hansson
On 21 September 2017 at 11:44, Adrian Hunter  wrote:
> On 21/09/17 12:01, Ulf Hansson wrote:
>> On 13 September 2017 at 13:40, Adrian Hunter  wrote:
>>> Hi
>>>
>>> Here is V8 of the hardware command queue patches without the software
>>> command queue patches, now using blk-mq and now with blk-mq support for
>>> non-CQE I/O.
>>>
>>> After the unacceptable debacle of the last release cycle, I expect an
>>> immediate response to these patches.
>>>
>>> HW CMDQ offers 25% - 50% better random multi-threaded I/O.  I see a slight
>>> 2% drop in sequential read speed but no change to sequential write.
>>>
>>> Non-CQE blk-mq showed a 3% decrease in sequential read performance.  This
>>> seemed to be coming from the inferior latency of running work items compared
>>> with a dedicated thread.  Hacking blk-mq workqueue to be unbound reduced the
>>> performance degradation from 3% to 1%.
>>>
>>> While we should look at changing blk-mq to give better workqueue 
>>> performance,
>>> a bigger gain is likely to be made by adding a new host API to enable the
>>> next already-prepared request to be issued directly from within ->done()
>>> callback of the current request.
>>
>> Adrian, I am reviewing this series, however let me comment on each
>> change individually.
>>
>> I have also run some tests on my ux500 board, enabling the blkmq
>> path via the new MMC Kconfig option. My idea was to run some iozone
>> comparisons between the legacy path and the new blkmq path, but I just
>> couldn't get to that point because of the following errors.
>>
>> I am using a Kingston 4GB SDHC card, which is detected and mounted
>> nicely. However, when I decide to do some writes to the card I get the
>> following errors.
>>
>> root@ME:/mnt/sdcard dd if=/dev/zero of=testfile bs=8192 count=5000 conv=fsync
>> [  463.714294] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  464.722656] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  466.081481] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  467.111236] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  468.669647] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  469.685699] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  471.043334] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  472.052337] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  473.342651] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  474.323760] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  475.544769] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  476.539031] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  477.748474] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>> [  478.724182] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
>>
>> I haven't yet gotten to the point of investigating this any further, and
>> unfortunately I have a busy schedule with traveling next week. I will do
>> my best to look into this as soon as I can.
>>
>> Perhaps you have some ideas?
>
> The behaviour depends on whether you have MMC_CAP_WAIT_WHILE_BUSY. Try
> changing that and see if it makes a difference.

Yes, it does! I disabled MMC_CAP_WAIT_WHILE_BUSY (and its
corresponding code in mmci.c) and the errors go away.

When I use MMC_CAP_WAIT_WHILE_BUSY I get these problems:

[  223.820983] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
[  224.815795] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
[  226.034881] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
[  227.112884] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
[  227.220275] mmc0: Card stuck in wrong state! mmcblk0 mmc_blk_card_stuck
[  228.686798] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
[  229.892150] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
[  231.031890] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
[  232.239013] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
5000+0 records in
5000+0 records out
root@ME:/mnt/sdcard

I looked at the new blkmq code from patch v10 13/15. It seems like the
MMC_CAP_WAIT_WHILE_BUSY is used to determine whether the async request
mechanism should be used or not. Perhaps I didn't look closely enough,
but maybe you could elaborate on why this seems to be the case!?

Kind regards
Uffe


Re: [PATCH V8 00/14] mmc: Add Command Queue support

2017-09-21 Thread Christoph Hellwig
On Wed, Sep 13, 2017 at 02:40:00PM +0300, Adrian Hunter wrote:
> Non-CQE blk-mq showed a 3% decrease in sequential read performance.  This
> seemed to be coming from the inferior latency of running work items compared
> with a dedicated thread.  Hacking blk-mq workqueue to be unbound reduced the
> performance degradation from 3% to 1%.

Can you start a discussion on just this on the linux-block lists?
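
To make sure we are talking about the same thing: I assume "unbound"
means adding WQ_UNBOUND where the kblockd workqueue is created, i.e.
something like the below (a guess, since the hack was not posted, and
the existing flags are from memory):

	/* block/blk-core.c: let the scheduler place blk-mq run_work
	 * on any CPU instead of pinning it to the submitting CPU,
	 * trading cache locality for dispatch latency
	 */
	kblockd_workqueue = alloc_workqueue("kblockd",
			WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_UNBOUND, 0);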



Re: [PATCH V8 00/14] mmc: Add Command Queue support

2017-09-21 Thread Adrian Hunter
On 21/09/17 12:01, Ulf Hansson wrote:
> On 13 September 2017 at 13:40, Adrian Hunter  wrote:
>> Hi
>>
>> Here is V8 of the hardware command queue patches without the software
>> command queue patches, now using blk-mq and now with blk-mq support for
>> non-CQE I/O.
>>
>> After the unacceptable debacle of the last release cycle, I expect an
>> immediate response to these patches.
>>
>> HW CMDQ offers 25% - 50% better random multi-threaded I/O.  I see a slight
>> 2% drop in sequential read speed but no change to sequential write.
>>
>> Non-CQE blk-mq showed a 3% decrease in sequential read performance.  This
>> seemed to be coming from the inferior latency of running work items compared
>> with a dedicated thread.  Hacking blk-mq workqueue to be unbound reduced the
>> performance degradation from 3% to 1%.
>>
>> While we should look at changing blk-mq to give better workqueue performance,
>> a bigger gain is likely to be made by adding a new host API to enable the
>> next already-prepared request to be issued directly from within ->done()
>> callback of the current request.
> 
> Adrian, I am reviewing this series; however, let me comment on each
> change individually.
> 
> I have also run some tests on my ux500 board, enabling the blkmq
> path via the new MMC Kconfig option. My idea was to run some iozone
> comparisons between the legacy path and the new blkmq path, but I just
> couldn't get to that point because of the following errors.
> 
> I am using a Kingston 4GB SDHC card, which is detected and mounted
> nicely. However, when I try to do some writes to the card I get the
> following errors.
> 
> root@ME:/mnt/sdcard dd if=/dev/zero of=testfile bs=8192 count=5000 conv=fsync
> [  463.714294] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  464.722656] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  466.081481] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  467.111236] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  468.669647] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  469.685699] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  471.043334] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  472.052337] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  473.342651] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  474.323760] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  475.544769] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  476.539031] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  477.748474] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> [  478.724182] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
> 
> I haven't yet got to the point of investigating this any further, and
> unfortunately I have a busy schedule with traveling next week. I will do
> my best to look into this as soon as I can.
> 
> Perhaps you have some ideas?

The behaviour depends on whether you have MMC_CAP_WAIT_WHILE_BUSY. Try
changing that and see if it makes a difference.


Re: [PATCH V8 00/14] mmc: Add Command Queue support

2017-09-21 Thread Ulf Hansson
On 13 September 2017 at 13:40, Adrian Hunter  wrote:
> Hi
>
> Here is V8 of the hardware command queue patches without the software
> command queue patches, now using blk-mq and now with blk-mq support for
> non-CQE I/O.
>
> After the unacceptable debacle of the last release cycle, I expect an
> immediate response to these patches.
>
> HW CMDQ offers 25% - 50% better random multi-threaded I/O.  I see a slight
> 2% drop in sequential read speed but no change to sequential write.
>
> Non-CQE blk-mq showed a 3% decrease in sequential read performance.  This
> seemed to be coming from the inferior latency of running work items compared
> with a dedicated thread.  Hacking blk-mq workqueue to be unbound reduced the
> performance degradation from 3% to 1%.
>
> While we should look at changing blk-mq to give better workqueue performance,
> a bigger gain is likely to be made by adding a new host API to enable the
> next already-prepared request to be issued directly from within ->done()
> callback of the current request.

Adrian, I am reviewing this series; however, let me comment on each
change individually.

I have also run some tests on my ux500 board, enabling the blkmq
path via the new MMC Kconfig option. My idea was to run some iozone
comparisons between the legacy path and the new blkmq path, but I just
couldn't get to that point because of the following errors.

I am using a Kingston 4GB SDHC card, which is detected and mounted
nicely. However, when I try to do some writes to the card I get the
following errors.

root@ME:/mnt/sdcard dd if=/dev/zero of=testfile bs=8192 count=5000 conv=fsync
[  463.714294] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
[  464.722656] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
[  466.081481] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
[  467.111236] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
[  468.669647] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
[  469.685699] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
[  471.043334] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
[  472.052337] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
[  473.342651] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
[  474.323760] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
[  475.544769] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
[  476.539031] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
[  477.748474] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
[  478.724182] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!

I haven't yet got to the point of investigating this any further, and
unfortunately I have a busy schedule with traveling next week. I will do
my best to look into this as soon as I can.

Perhaps you have some ideas?

Kind regards
Uffe

>
>
> Changes since V7:
> Re-based
>   mmc: core: Introduce host claiming by context
> Slightly simplified
>   mmc: core: Add parameter use_blk_mq
> New patch.
>   mmc: core: Remove unnecessary host claim
> New patch.
>   mmc: core: Export mmc_start_bkops()
> New patch.
>   mmc: core: Export mmc_start_request()
> New patch.
>   mmc: block: Add CQE and blk-mq support
> Add blk-mq support for non-CQE requests
>
> Changes since V6:
>   mmc: core: Introduce host claiming by context
> New patch.
>   mmc: core: Move mmc_start_areq() declaration
> Dropped because it has been applied
>   mmc: block: Fix block status codes
> Dropped because it has been applied
>   mmc: host: Add CQE interface
> Dropped because it has been applied
>   mmc: core: Turn off CQE before sending commands
> Dropped because it has been applied
>   mmc: block: Factor out mmc_setup_queue()
> New patch.
>   mmc: block: Add CQE support
> Drop legacy support and add blk-mq support
>
> Changes since V5:
> Re-based
>   mmc: core: Add mmc_retune_hold_now()
> Dropped because it has been applied
>   mmc: core: Add members to mmc_request and mmc_data for CQE's
> Dropped because it has been applied
>   mmc: core: Move mmc_start_areq() declaration
> New patch at Ulf's request
>   mmc: block: Fix block status codes
> Another un-related patch
>   mmc: host: Add CQE interface
> Move recovery_notifier() callback to struct mmc_request
>   mmc: core: Add support for handling CQE requests
> Roll __mmc_cqe_request_done() into mmc_cqe_request_done()
> Move function declarations requested by Ulf
>   mmc: core: Remove unused MMC_CAP2_PACKED_CMD
> Dropped because it has been applied
>   mmc: block: Add CQE support
> Add explanation to commit message
> Adjustment for changed recovery_notifier() callback
>   mmc: cqhci: support for command