RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-30 Thread Jose Abreu
From: Jon Hunter 
Date: Jul/30/2019, 14:36:39 (UTC+00:00)

> 
> On 30/07/2019 10:39, Jose Abreu wrote:
> 
> ...
> 
> > I looked at the netsec implementation and noticed that we are syncing
> > the old buffer for the device instead of the new one. netsec syncs the
> > buffer for the device immediately after allocation, which may be what
> > we have to do. Maybe the attached patch can make things work for you?
> 
> Great! This one works. I have booted this several times and I am no
> longer seeing any issues. Thanks for figuring this out!
> 
> Feel free to add my ...
> 
> Tested-by: Jon Hunter 

This one was hard to find :) Thank you for your patience in testing 
this!

---
Thanks,
Jose Miguel Abreu


Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-30 Thread Jon Hunter


On 30/07/2019 10:39, Jose Abreu wrote:

...

> I looked at the netsec implementation and noticed that we are syncing
> the old buffer for the device instead of the new one. netsec syncs the
> buffer for the device immediately after allocation, which may be what
> we have to do. Maybe the attached patch can make things work for you?

Great! This one works. I have booted this several times and I am no
longer seeing any issues. Thanks for figuring this out!

Feel free to add my ...

Tested-by: Jon Hunter 

Cheers
Jon

-- 
nvpublic


RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-30 Thread Jose Abreu
From: Jon Hunter 
Date: Jul/29/2019, 22:33:04 (UTC+00:00)

> 
> On 29/07/2019 15:08, Jose Abreu wrote:
> 
> ...
> 
> >>> Hi Catalin and Will,
> >>>
> >>> Sorry to add you to such a long thread, but we are seeing a DMA issue
> >>> with the stmmac driver on an ARM64 platform with the IOMMU enabled.
> >>>
> >>> The issue seems to be solved when buffer allocations for DMA-based
> >>> transfers are *not* mapped with the DMA_ATTR_SKIP_CPU_SYNC flag *OR*
> >>> when the IOMMU is disabled.
> >>>
> >>> Notice that after a transfer is done we do use
> >>> dma_sync_single_for_{cpu,device}, and then we reuse *the same* page for
> >>> another transfer.
> >>>
> >>> Can you please comment on whether DMA_ATTR_SKIP_CPU_SYNC cannot be used
> >>> on ARM64 platforms with an IOMMU?
> >>
> >> In terms of what they do, there should be no difference on arm64 between:
> >>
> >> dma_map_page(..., dir);
> >> ...
> >> dma_unmap_page(..., dir);
> >>
> >> and:
> >>
> >> dma_map_page_attrs(..., dir, DMA_ATTR_SKIP_CPU_SYNC);
> >> dma_sync_single_for_device(..., dir);
> >> ...
> >> dma_sync_single_for_cpu(..., dir);
> >> dma_unmap_page_attrs(..., dir, DMA_ATTR_SKIP_CPU_SYNC);
> >>
> >> provided that the first sync covers the whole buffer and any subsequent 
> >> ones cover at least the parts of the buffer which may have changed. Plus 
> >> for coherent hardware it's entirely moot either way.
> > 
> > Thanks for confirming. That's indeed what stmmac does when a buffer is
> > received: it syncs the packet size for the CPU.
> > 
> >>
> >> Given Jon's previous findings, I would lean towards the idea that 
> >> performing the extra (redundant) cache maintenance plus barrier in 
> >> dma_unmap is mostly just perturbing timing in the same way as the debug 
> >> print which also made things seem OK.
> > 
> > Mikko said that Tegra186 is not coherent, so we have to explicitly
> > flush the pipeline, but I don't understand why sync_single() is not
> > doing it ...
> > 
> > Jon, can you please remove *all* debug prints, hacks, etc., and test
> > the attached patch on a plain -net tree?
> 
> So far I have just been testing on the mainline kernel branch. The issue
> still persists after applying this on mainline. I can test on the -net
> tree, but I am not sure that will make a difference.
> 
> Cheers
> Jon
> 
> -- 
> nvpublic

I looked at the netsec implementation and noticed that we are syncing
the old buffer for the device instead of the new one. netsec syncs the
buffer for the device immediately after allocation, which may be what
we have to do. Maybe the attached patch can make things work for you?
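
For illustration, a minimal sketch of that idea in stmmac_rx_refill()
(hypothetical; the attached patch is authoritative, and priv->dma_buf_sz
is assumed to be the mapped buffer length):

	if (!buf->page) {
		buf->page = page_pool_dev_alloc_pages(rx_q->page_pool);
		if (!buf->page)
			break;

		buf->addr = buf->page->dma_addr;
		/* Make the freshly allocated buffer device-visible
		 * before the descriptor hands it to the DMA engine. */
		dma_sync_single_for_device(priv->device, buf->addr,
					   priv->dma_buf_sz,
					   DMA_FROM_DEVICE);
	}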

---
Thanks,
Jose Miguel Abreu


0001-net-stmmac-Sync-RX-Buffer-upon-allocation.patch
Description: 0001-net-stmmac-Sync-RX-Buffer-upon-allocation.patch


Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-29 Thread Jon Hunter


On 29/07/2019 15:08, Jose Abreu wrote:

...

>>> Hi Catalin and Will,
>>>
>>> Sorry to add you to such a long thread, but we are seeing a DMA issue
>>> with the stmmac driver on an ARM64 platform with the IOMMU enabled.
>>>
>>> The issue seems to be solved when buffer allocations for DMA-based
>>> transfers are *not* mapped with the DMA_ATTR_SKIP_CPU_SYNC flag *OR*
>>> when the IOMMU is disabled.
>>>
>>> Notice that after a transfer is done we do use
>>> dma_sync_single_for_{cpu,device}, and then we reuse *the same* page for
>>> another transfer.
>>>
>>> Can you please comment on whether DMA_ATTR_SKIP_CPU_SYNC cannot be used
>>> on ARM64 platforms with an IOMMU?
>>
>> In terms of what they do, there should be no difference on arm64 between:
>>
>> dma_map_page(..., dir);
>> ...
>> dma_unmap_page(..., dir);
>>
>> and:
>>
>> dma_map_page_attrs(..., dir, DMA_ATTR_SKIP_CPU_SYNC);
>> dma_sync_single_for_device(..., dir);
>> ...
>> dma_sync_single_for_cpu(..., dir);
>> dma_unmap_page_attrs(..., dir, DMA_ATTR_SKIP_CPU_SYNC);
>>
>> provided that the first sync covers the whole buffer and any subsequent 
>> ones cover at least the parts of the buffer which may have changed. Plus 
>> for coherent hardware it's entirely moot either way.
> 
> Thanks for confirming. That's indeed what stmmac does when a buffer is
> received: it syncs the packet size for the CPU.
> 
>>
>> Given Jon's previous findings, I would lean towards the idea that 
>> performing the extra (redundant) cache maintenance plus barrier in 
>> dma_unmap is mostly just perturbing timing in the same way as the debug 
>> print which also made things seem OK.
> 
> Mikko said that Tegra186 is not coherent, so we have to explicitly
> flush the pipeline, but I don't understand why sync_single() is not
> doing it ...
> 
> Jon, can you please remove *all* debug prints, hacks, etc., and test
> the attached patch on a plain -net tree?

So far I have just been testing on the mainline kernel branch. The issue
still persists after applying this on mainline. I can test on the -net
tree, but I am not sure that will make a difference.

Cheers
Jon

-- 
nvpublic


RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-29 Thread Jose Abreu
From: Robin Murphy 
Date: Jul/29/2019, 12:52:02 (UTC+00:00)

> On 29/07/2019 12:29, Jose Abreu wrote:
> > ++ Catalin, Will (ARM64 Maintainers)
> > 
> > From: Jon Hunter 
> > Date: Jul/29/2019, 11:55:18 (UTC+00:00)
> > 
> >>
> >> On 29/07/2019 09:16, Jose Abreu wrote:
> >>> From: Jose Abreu 
> >>> Date: Jul/27/2019, 16:56:37 (UTC+00:00)
> >>>
>  From: Jon Hunter 
>  Date: Jul/26/2019, 15:11:00 (UTC+00:00)
> 
> >
> > On 25/07/2019 16:12, Jose Abreu wrote:
> >> From: Jon Hunter 
> >> Date: Jul/25/2019, 15:25:59 (UTC+00:00)
> >>
> >>>
> >>> On 25/07/2019 14:26, Jose Abreu wrote:
> >>>
> >>> ...
> >>>
>  Well, I wasn't expecting that :/
> 
>  Per the documentation of barriers, I think we should set the descriptor
>  fields, then a barrier, and finally the ownership to HW, so that the
>  remaining fields are coherent before the owner is set.
> 
>  Anyway, can you also add a dma_rmb() after the call to
>  stmmac_rx_status()?
> >>>
> >>> Yes. I removed the debug print and added the barrier, but that did
> >>> not help.
> >>
> >> So, I was finally able to setup NFS using your replicated setup and I
> >> can't see the issue :(
> >>
> >> The only difference I have from yours is that I'm using TCP in NFS
> >> whilst you (I believe from the logs), use UDP.
> >
> > So I tried TCP by setting the kernel boot params to 'nfsvers=3' and
> > 'proto=tcp' and this does appear to be more stable, but not 100% stable.
> > It still appears to fail in the same place about 50% of the time.
> >
> >> You do have flow control active, right? And your HW FIFO size is >= 4k?
> >
> > How can I verify if flow control is active?
> 
>  You can check it by dumping register MTL_RxQ_Operation_Mode (0xd30).
> >>
> >> Where would be the appropriate place to dump this? After probe? Maybe
> >> best if you can share a code snippet of where to dump this.
> >>
>  Can you also add IOMMU debug in file "drivers/iommu/iommu.c" ?
> >>
> >> You can find a boot log here:
> >>
> >> https://paste.ubuntu.com/p/qtRqtYKHGF/
> >>
> >>> And, please try the attached debug patch.
> >>
> >> With this patch it appears to boot fine. So far no issues seen.
> > 
> > Thank you for testing.
> > 
> > Hi Catalin and Will,
> > 
> > Sorry to add you to such a long thread, but we are seeing a DMA issue
> > with the stmmac driver on an ARM64 platform with the IOMMU enabled.
> > 
> > The issue seems to be solved when buffer allocations for DMA-based
> > transfers are *not* mapped with the DMA_ATTR_SKIP_CPU_SYNC flag *OR*
> > when the IOMMU is disabled.
> > 
> > Notice that after a transfer is done we do use
> > dma_sync_single_for_{cpu,device}, and then we reuse *the same* page for
> > another transfer.
> > 
> > Can you please comment on whether DMA_ATTR_SKIP_CPU_SYNC cannot be used
> > on ARM64 platforms with an IOMMU?
> 
> In terms of what they do, there should be no difference on arm64 between:
> 
> dma_map_page(..., dir);
> ...
> dma_unmap_page(..., dir);
> 
> and:
> 
> dma_map_page_attrs(..., dir, DMA_ATTR_SKIP_CPU_SYNC);
> dma_sync_single_for_device(..., dir);
> ...
> dma_sync_single_for_cpu(..., dir);
> dma_unmap_page_attrs(..., dir, DMA_ATTR_SKIP_CPU_SYNC);
> 
> provided that the first sync covers the whole buffer and any subsequent 
> ones cover at least the parts of the buffer which may have changed. Plus 
> for coherent hardware it's entirely moot either way.

Thanks for confirming. That's indeed what stmmac does when a buffer is
received: it syncs the packet size for the CPU.

> 
> Given Jon's previous findings, I would lean towards the idea that 
> performing the extra (redundant) cache maintenance plus barrier in 
> dma_unmap is mostly just perturbing timing in the same way as the debug 
> print which also made things seem OK.

Mikko said that Tegra186 is not coherent, so we have to explicitly
flush the pipeline, but I don't understand why sync_single() is not
doing it ...

Jon, can you please remove *all* debug prints, hacks, etc., and test
the attached patch on a plain -net tree?

---
Thanks,
Jose Miguel Abreu


0001-net-stmmac-Flush-all-data-cache-in-RX-path.patch
Description: 0001-net-stmmac-Flush-all-data-cache-in-RX-path.patch


Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-29 Thread Robin Murphy

On 29/07/2019 12:29, Jose Abreu wrote:

++ Catalin, Will (ARM64 Maintainers)

From: Jon Hunter 
Date: Jul/29/2019, 11:55:18 (UTC+00:00)



On 29/07/2019 09:16, Jose Abreu wrote:

From: Jose Abreu 
Date: Jul/27/2019, 16:56:37 (UTC+00:00)


From: Jon Hunter 
Date: Jul/26/2019, 15:11:00 (UTC+00:00)



On 25/07/2019 16:12, Jose Abreu wrote:

From: Jon Hunter 
Date: Jul/25/2019, 15:25:59 (UTC+00:00)



On 25/07/2019 14:26, Jose Abreu wrote:

...


Well, I wasn't expecting that :/

Per the documentation of barriers, I think we should set the descriptor
fields, then a barrier, and finally the ownership to HW, so that the
remaining fields are coherent before the owner is set.

Anyway, can you also add a dma_rmb() after the call to
stmmac_rx_status()?


Yes. I removed the debug print and added the barrier, but that did not help.


So, I was finally able to set up NFS using your replicated setup and I
can't see the issue :(

The only difference I have from yours is that I'm using TCP in NFS
whilst you (I believe from the logs) use UDP.


So I tried TCP by setting the kernel boot params to 'nfsvers=3' and
'proto=tcp' and this does appear to be more stable, but not 100% stable.
It still appears to fail in the same place about 50% of the time.


You do have flow control active, right? And your HW FIFO size is >= 4k?


How can I verify if flow control is active?


You can check it by dumping register MTL_RxQ_Operation_Mode (0xd30).


Where would be the appropriate place to dump this? After probe? Maybe
best if you can share a code snippet of where to dump this.


Can you also add IOMMU debug in the file "drivers/iommu/iommu.c"?


You can find a boot log here:

https://paste.ubuntu.com/p/qtRqtYKHGF/


And, please try the attached debug patch.


With this patch it appears to boot fine. So far no issues seen.


Thank you for testing.

Hi Catalin and Will,

Sorry to add you to such a long thread, but we are seeing a DMA issue
with the stmmac driver on an ARM64 platform with the IOMMU enabled.

The issue seems to be solved when buffer allocations for DMA-based
transfers are *not* mapped with the DMA_ATTR_SKIP_CPU_SYNC flag *OR*
when the IOMMU is disabled.

Notice that after a transfer is done we do use
dma_sync_single_for_{cpu,device}, and then we reuse *the same* page for
another transfer.

Can you please comment on whether DMA_ATTR_SKIP_CPU_SYNC cannot be used
on ARM64 platforms with an IOMMU?


In terms of what they do, there should be no difference on arm64 between:

dma_map_page(..., dir);
...
dma_unmap_page(..., dir);

and:

dma_map_page_attrs(..., dir, DMA_ATTR_SKIP_CPU_SYNC);
dma_sync_single_for_device(..., dir);
...
dma_sync_single_for_cpu(..., dir);
dma_unmap_page_attrs(..., dir, DMA_ATTR_SKIP_CPU_SYNC);

provided that the first sync covers the whole buffer and any subsequent 
ones cover at least the parts of the buffer which may have changed. Plus 
for coherent hardware it's entirely moot either way.
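
As a concrete sketch of the second pattern for a recycled RX buffer on a
non-coherent system (hypothetical variable names; len is the received
frame length):

	/* Map once at allocation time, deferring the CPU sync ... */
	dma_addr_t dma = dma_map_page_attrs(dev, page, 0, PAGE_SIZE,
					    DMA_FROM_DEVICE,
					    DMA_ATTR_SKIP_CPU_SYNC);

	/* ... sync the whole buffer before handing it to the device ... */
	dma_sync_single_for_device(dev, dma, PAGE_SIZE, DMA_FROM_DEVICE);

	/* ... after a completed transfer, sync back only the bytes the
	 * device may have written, before the CPU reads them ... */
	dma_sync_single_for_cpu(dev, dma, len, DMA_FROM_DEVICE);

	/* ... sync for device again before reusing the same page, and
	 * finally unmap with the same attribute on teardown. */
	dma_unmap_page_attrs(dev, dma, PAGE_SIZE, DMA_FROM_DEVICE,
			     DMA_ATTR_SKIP_CPU_SYNC);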


Given Jon's previous findings, I would lean towards the idea that 
performing the extra (redundant) cache maintenance plus barrier in 
dma_unmap is mostly just perturbing timing in the same way as the debug 
print which also made things seem OK.


Robin.


RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-29 Thread Jose Abreu
++ Catalin, Will (ARM64 Maintainers)

From: Jon Hunter 
Date: Jul/29/2019, 11:55:18 (UTC+00:00)

> 
> On 29/07/2019 09:16, Jose Abreu wrote:
> > From: Jose Abreu 
> > Date: Jul/27/2019, 16:56:37 (UTC+00:00)
> > 
> >> From: Jon Hunter 
> >> Date: Jul/26/2019, 15:11:00 (UTC+00:00)
> >>
> >>>
> >>> On 25/07/2019 16:12, Jose Abreu wrote:
>  From: Jon Hunter 
>  Date: Jul/25/2019, 15:25:59 (UTC+00:00)
> 
> >
> > On 25/07/2019 14:26, Jose Abreu wrote:
> >
> > ...
> >
> >> Well, I wasn't expecting that :/
> >>
> >> Per the documentation of barriers, I think we should set the
> >> descriptor fields, then a barrier, and finally the ownership to HW,
> >> so that the remaining fields are coherent before the owner is set.
> >>
> >> Anyway, can you also add a dma_rmb() after the call to
> >> stmmac_rx_status()?
> >
> > Yes. I removed the debug print and added the barrier, but that did not help.
> 
>  So, I was finally able to set up NFS using your replicated setup and I
>  can't see the issue :(
> 
>  The only difference I have from yours is that I'm using TCP in NFS
>  whilst you (I believe from the logs) use UDP.
> >>>
> >>> So I tried TCP by setting the kernel boot params to 'nfsvers=3' and
> >>> 'proto=tcp' and this does appear to be more stable, but not 100% stable.
> >>> It still appears to fail in the same place about 50% of the time.
> >>>
>  You do have flow control active, right? And your HW FIFO size is >= 4k?
> >>>
> >>> How can I verify if flow control is active?
> >>
> >> You can check it by dumping register MTL_RxQ_Operation_Mode (0xd30).
> 
> Where would be the appropriate place to dump this? After probe? Maybe
> best if you can share a code snippet of where to dump this.
> 
> >> Can you also add IOMMU debug in the file "drivers/iommu/iommu.c"?
> 
> You can find a boot log here:
> 
> https://paste.ubuntu.com/p/qtRqtYKHGF/
> 
> > And, please try the attached debug patch.
> 
> With this patch it appears to boot fine. So far no issues seen.

Thank you for testing.

Hi Catalin and Will,

Sorry to add you to such a long thread, but we are seeing a DMA issue
with the stmmac driver on an ARM64 platform with the IOMMU enabled.

The issue seems to be solved when buffer allocations for DMA-based
transfers are *not* mapped with the DMA_ATTR_SKIP_CPU_SYNC flag *OR*
when the IOMMU is disabled.

Notice that after a transfer is done we do use
dma_sync_single_for_{cpu,device}, and then we reuse *the same* page for
another transfer.

Can you please comment on whether DMA_ATTR_SKIP_CPU_SYNC cannot be used
on ARM64 platforms with an IOMMU?

---
Thanks,
Jose Miguel Abreu


Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-29 Thread Jon Hunter


On 29/07/2019 09:16, Jose Abreu wrote:
> From: Jose Abreu 
> Date: Jul/27/2019, 16:56:37 (UTC+00:00)
> 
>> From: Jon Hunter 
>> Date: Jul/26/2019, 15:11:00 (UTC+00:00)
>>
>>>
>>> On 25/07/2019 16:12, Jose Abreu wrote:
 From: Jon Hunter 
 Date: Jul/25/2019, 15:25:59 (UTC+00:00)

>
> On 25/07/2019 14:26, Jose Abreu wrote:
>
> ...
>
>> Well, I wasn't expecting that :/
>>
>> Per the documentation of barriers, I think we should set the descriptor
>> fields, then a barrier, and finally the ownership to HW, so that the
>> remaining fields are coherent before the owner is set.
>>
>> Anyway, can you also add a dma_rmb() after the call to
>> stmmac_rx_status()?
>
> Yes. I removed the debug print and added the barrier, but that did not help.

 So, I was finally able to set up NFS using your replicated setup and I
 can't see the issue :(

 The only difference I have from yours is that I'm using TCP in NFS
 whilst you (I believe from the logs) use UDP.
>>>
>>> So I tried TCP by setting the kernel boot params to 'nfsvers=3' and
>>> 'proto=tcp' and this does appear to be more stable, but not 100% stable.
>>> It still appears to fail in the same place about 50% of the time.
>>>
 You do have flow control active, right? And your HW FIFO size is >= 4k?
>>>
>>> How can I verify if flow control is active?
>>
>> You can check it by dumping register MTL_RxQ_Operation_Mode (0xd30).

Where would be the appropriate place to dump this? After probe? Maybe
best if you can share a code snippet of where to dump this.

>> Can you also add IOMMU debug in the file "drivers/iommu/iommu.c"?

You can find a boot log here:

https://paste.ubuntu.com/p/qtRqtYKHGF/

> And, please try the attached debug patch.

With this patch it appears to boot fine. So far no issues seen.

Jon

-- 
nvpublic


Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-29 Thread Mikko Perttunen
My understanding is that Tegra186 does not have DMA coherency, but 
Tegra194 does.


Mikko

On 23.7.2019 16.34, Jon Hunter wrote:


On 23/07/2019 13:51, Jose Abreu wrote:

From: Jon Hunter 
Date: Jul/23/2019, 12:58:55 (UTC+00:00)



On 23/07/2019 11:49, Jose Abreu wrote:

From: Jon Hunter 
Date: Jul/23/2019, 11:38:33 (UTC+00:00)



On 23/07/2019 11:07, Jose Abreu wrote:

From: Jon Hunter 
Date: Jul/23/2019, 11:01:24 (UTC+00:00)


This appears to be a winner: by disabling the SMMU for the ethernet
controller and reverting commit 954a03be033c7cef80ddc232e7cbdb17df735663,
this worked! So yes, it appears to be related to the SMMU being enabled. We
had to enable the SMMU for ethernet recently due to commit
954a03be033c7cef80ddc232e7cbdb17df735663.


Finally :)

However, from "git show 954a03be033c7cef80ddc232e7cbdb17df735663":

+ There are few reasons to allow unmatched stream bypass, and
+ even fewer good ones.  If saying YES here breaks your board
+ you should work on fixing your board.

So, how can we fix this? Is your ethernet DT node marked as
"dma-coherent;"?


TBH I have no idea. I can't say I fully understand your change or how it
is breaking things for us.

Currently, the Tegra DT binding does not have 'dma-coherent' set. I see
this is optional, but I am not sure how you determine whether or not
this should be set.


From my understanding it means that your device / IP DMA accesses are coherent
from the CPU's point of view. I think that will be the case if the GMAC is not
behind any kind of IOMMU in the HW architecture.


I understand what coherency is; I just don't know how you tell whether this
implementation of the ethernet controller is coherent or not.


Do you have any detailed diagram of your HW? Such as block / IP
connections, address space wiring, ...


Yes, this can be found in the Tegra X2 Technical Reference Manual [0].
Unfortunately, you need to create an account to download it.

Jon

[0] https://developer.nvidia.com/embedded/dlc/parker-series-trm



RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-29 Thread Jose Abreu
From: Jose Abreu 
Date: Jul/27/2019, 16:56:37 (UTC+00:00)

> From: Jon Hunter 
> Date: Jul/26/2019, 15:11:00 (UTC+00:00)
> 
> > 
> > On 25/07/2019 16:12, Jose Abreu wrote:
> > > From: Jon Hunter 
> > > Date: Jul/25/2019, 15:25:59 (UTC+00:00)
> > > 
> > >>
> > >> On 25/07/2019 14:26, Jose Abreu wrote:
> > >>
> > >> ...
> > >>
> > >>> Well, I wasn't expecting that :/
> > >>>
> > >>> Per the documentation of barriers, I think we should set the
> > >>> descriptor fields, then a barrier, and finally the ownership to HW,
> > >>> so that the remaining fields are coherent before the owner is set.
> > >>>
> > >>> Anyway, can you also add a dma_rmb() after the call to
> > >>> stmmac_rx_status()?
> > >>
> > >> Yes. I removed the debug print and added the barrier, but that did not help.
> > > 
> > > So, I was finally able to set up NFS using your replicated setup and I
> > > can't see the issue :(
> > > 
> > > The only difference I have from yours is that I'm using TCP in NFS
> > > whilst you (I believe from the logs) use UDP.
> > 
> > So I tried TCP by setting the kernel boot params to 'nfsvers=3' and
> > 'proto=tcp' and this does appear to be more stable, but not 100% stable.
> > It still appears to fail in the same place about 50% of the time.
> > 
> > > You do have flow control active, right? And your HW FIFO size is >= 4k?
> > 
> > How can I verify if flow control is active?
> 
> You can check it by dumping register MTL_RxQ_Operation_Mode (0xd30).
> 
> Can you also add IOMMU debug in the file "drivers/iommu/iommu.c"?

And, please try the attached debug patch.

---
Thanks,
Jose Miguel Abreu


0001-net-page_pool-Do-not-skip-CPU-sync.patch
Description: 0001-net-page_pool-Do-not-skip-CPU-sync.patch


RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-27 Thread Jose Abreu
From: Jon Hunter 
Date: Jul/26/2019, 15:11:00 (UTC+00:00)

> 
> On 25/07/2019 16:12, Jose Abreu wrote:
> > From: Jon Hunter 
> > Date: Jul/25/2019, 15:25:59 (UTC+00:00)
> > 
> >>
> >> On 25/07/2019 14:26, Jose Abreu wrote:
> >>
> >> ...
> >>
> >>> Well, I wasn't expecting that :/
> >>>
> >>> Per the documentation of barriers, I think we should set the
> >>> descriptor fields, then a barrier, and finally the ownership to HW,
> >>> so that the remaining fields are coherent before the owner is set.
> >>>
> >>> Anyway, can you also add a dma_rmb() after the call to
> >>> stmmac_rx_status()?
> >>
> >> Yes. I removed the debug print and added the barrier, but that did not help.
> > 
> > So, I was finally able to set up NFS using your replicated setup and I
> > can't see the issue :(
> > 
> > The only difference I have from yours is that I'm using TCP in NFS
> > whilst you (I believe from the logs) use UDP.
> 
> So I tried TCP by setting the kernel boot params to 'nfsvers=3' and
> 'proto=tcp' and this does appear to be more stable, but not 100% stable.
> It still appears to fail in the same place about 50% of the time.
> 
> > You do have flow control active, right? And your HW FIFO size is >= 4k?
> 
> How can I verify if flow control is active?

You can check it by dumping register MTL_RxQ_Operation_Mode (0xd30).
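
For example, with a throwaway debug print (a sketch; 0xd30 is the queue-0
offset quoted above, and the EHFC bit position is an assumption to be
checked against the DWMAC4/5 databook):

	/* Dump MTL_RxQ0_Operation_Mode after stmmac_hw_setup() and check
	 * the hardware flow control enable bit (EHFC, assumed bit 7). */
	u32 mtl_rx_op = readl(priv->ioaddr + 0xd30);

	dev_info(priv->device, "MTL_RxQ0_Operation_Mode=0x%08x EHFC=%u\n",
		 mtl_rx_op, !!(mtl_rx_op & BIT(7)));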

Can you also add IOMMU debug in the file "drivers/iommu/iommu.c"?

---
Thanks,
Jose Miguel Abreu


Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-26 Thread Jon Hunter


On 25/07/2019 16:12, Jose Abreu wrote:
> From: Jon Hunter 
> Date: Jul/25/2019, 15:25:59 (UTC+00:00)
> 
>>
>> On 25/07/2019 14:26, Jose Abreu wrote:
>>
>> ...
>>
>>> Well, I wasn't expecting that :/
>>>
>>> Per the documentation of barriers, I think we should set the descriptor
>>> fields, then a barrier, and finally the ownership to HW, so that the
>>> remaining fields are coherent before the owner is set.
>>>
>>> Anyway, can you also add a dma_rmb() after the call to
>>> stmmac_rx_status()?
>>
>> Yes. I removed the debug print and added the barrier, but that did not help.
> 
> So, I was finally able to set up NFS using your replicated setup and I
> can't see the issue :(
> 
> The only difference I have from yours is that I'm using TCP in NFS
> whilst you (I believe from the logs) use UDP.

So I tried TCP by setting the kernel boot params to 'nfsvers=3' and
'proto=tcp' and this does appear to be more stable, but not 100% stable.
It still appears to fail in the same place about 50% of the time.

> You do have flow control active, right? And your HW FIFO size is >= 4k?

How can I verify if flow control is active?

The documentation for this device indicates a max transfer size of 16kB
for TX and RX.

Cheers
Jon

-- 
nvpublic


RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-25 Thread Jose Abreu
From: Jon Hunter 
Date: Jul/25/2019, 15:25:59 (UTC+00:00)

> 
> On 25/07/2019 14:26, Jose Abreu wrote:
> 
> ...
> 
> > Well, I wasn't expecting that :/
> > 
> > Per the documentation of barriers, I think we should set the descriptor
> > fields, then a barrier, and finally the ownership to HW, so that the
> > remaining fields are coherent before the owner is set.
> > 
> > Anyway, can you also add a dma_rmb() after the call to
> > stmmac_rx_status()?
> 
> Yes. I removed the debug print and added the barrier, but that did not help.

So, I was finally able to set up NFS using your replicated setup and I
can't see the issue :(

The only difference I have from yours is that I'm using TCP in NFS
whilst you (I believe from the logs) use UDP.

You do have flow control active, right? And your HW FIFO size is >= 4k?

---
Thanks,
Jose Miguel Abreu


Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-25 Thread Jon Hunter


On 25/07/2019 14:26, Jose Abreu wrote:

...

> Well, I wasn't expecting that :/
> 
> Per the documentation of barriers, I think we should set the descriptor
> fields, then a barrier, and finally the ownership to HW, so that the
> remaining fields are coherent before the owner is set.
> 
> Anyway, can you also add a dma_rmb() after the call to
> stmmac_rx_status()?

Yes. I removed the debug print and added the barrier, but that did not help.

Jon

-- 
nvpublic


RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-25 Thread Jose Abreu
From: Jon Hunter 
Date: Jul/25/2019, 14:20:07 (UTC+00:00)

> 
> On 03/07/2019 11:37, Jose Abreu wrote:
> > Mapping and unmapping the DMA region is a high bottleneck in the stmmac
> > driver, especially in the RX path.
> > 
> > This commit introduces support for the Page Pool API and uses it in all RX
> > queues. With this change, we get more stable throughput and some increase
> > of bandwidth with iperf:
> > - MAC1000: 950 Mbps
> > - XGMAC: 9.22 Gbps
> > 
> > Signed-off-by: Jose Abreu 
> > Cc: Joao Pinto 
> > Cc: David S. Miller 
> > Cc: Giuseppe Cavallaro 
> > Cc: Alexandre Torgue 
> > Cc: Maxime Coquelin 
> > Cc: Maxime Ripard 
> > Cc: Chen-Yu Tsai 
> > ---
> >  drivers/net/ethernet/stmicro/stmmac/Kconfig   |   1 +
> >  drivers/net/ethernet/stmicro/stmmac/stmmac.h  |  10 +-
> >  drivers/net/ethernet/stmicro/stmmac/stmmac_main.c | 196 
> > ++
> >  3 files changed, 63 insertions(+), 144 deletions(-)
> 
> ...
> 
> > @@ -3306,49 +3295,22 @@ static inline void stmmac_rx_refill(struct 
> > stmmac_priv *priv, u32 queue)
> > else
> > p = rx_q->dma_rx + entry;
> >  
> > -   if (likely(!rx_q->rx_skbuff[entry])) {
> > -   struct sk_buff *skb;
> > -
> > -   skb = netdev_alloc_skb_ip_align(priv->dev, bfsize);
> > -   if (unlikely(!skb)) {
> > -   /* so for a while no zero-copy! */
> > -   rx_q->rx_zeroc_thresh = STMMAC_RX_THRESH;
> > -   if (unlikely(net_ratelimit()))
> > -   dev_err(priv->device,
> > -   "fail to alloc skb entry %d\n",
> > -   entry);
> > -   break;
> > -   }
> > -
> > -   rx_q->rx_skbuff[entry] = skb;
> > -   rx_q->rx_skbuff_dma[entry] =
> > -   dma_map_single(priv->device, skb->data, bfsize,
> > -  DMA_FROM_DEVICE);
> > -   if (dma_mapping_error(priv->device,
> > - rx_q->rx_skbuff_dma[entry])) {
> > -   netdev_err(priv->dev, "Rx DMA map failed\n");
> > -   dev_kfree_skb(skb);
> > +   if (!buf->page) {
> > +   buf->page = page_pool_dev_alloc_pages(rx_q->page_pool);
> > +   if (!buf->page)
> > break;
> > -   }
> > -
> > -   stmmac_set_desc_addr(priv, p, 
> > rx_q->rx_skbuff_dma[entry]);
> > -   stmmac_refill_desc3(priv, rx_q, p);
> > -
> > -   if (rx_q->rx_zeroc_thresh > 0)
> > -   rx_q->rx_zeroc_thresh--;
> > -
> > -   netif_dbg(priv, rx_status, priv->dev,
> > - "refill entry #%d\n", entry);
> > }
> > -   dma_wmb();
> > +
> > +   buf->addr = buf->page->dma_addr;
> > +   stmmac_set_desc_addr(priv, p, buf->addr);
> > +   stmmac_refill_desc3(priv, rx_q, p);
> >  
> > rx_q->rx_count_frames++;
> > rx_q->rx_count_frames %= priv->rx_coal_frames;
> > use_rx_wd = priv->use_riwt && rx_q->rx_count_frames;
> >  
> > -   stmmac_set_rx_owner(priv, p, use_rx_wd);
> > -
> > dma_wmb();
> > +   stmmac_set_rx_owner(priv, p, use_rx_wd);
> >  
> > entry = STMMAC_GET_ENTRY(entry, DMA_RX_SIZE);
> > }
> 
> I was looking at this change in closer detail, and one thing that
> stuck out to me was the hunk above, where the barrier was moved from
> after the stmmac_set_rx_owner() call to before it.
> 
> So I moved it back and I no longer saw the crash. However, I then
> recalled that I had the patch enabling the debug prints for the buffer
> address applied; after reverting that, the crash occurred again.
> 
> In other words, what works for me is moving the above barrier and adding
> the debug print, which appears to suggest that there is some
> timing/coherency issue here. Anyway, maybe this is a clue to what is
> going on?
> 
> diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c 
> b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> index a7486c2f3221..2f016397231b 100644
> --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> @@ -3303,8 +3303,8 @@ static inline void stmmac_rx_refill(struct stmmac_priv 
> *priv, u32 queue)
> rx_q->rx_count_frames %= priv->rx_coal_frames;
> use_rx_wd = priv->use_riwt && rx_q->rx_count_frames;
>  
> -   dma_wmb();
> stmmac_set_rx_owner(priv, p, use_rx_wd);
> +   dma_wmb();
>  
> entry = STMMAC_GET_ENTRY(entry, DMA_RX_SIZE);
> }
> @@ -3438,6 +3438,10 @@ static int stmmac_rx(struct 

Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-25 Thread Jon Hunter


On 03/07/2019 11:37, Jose Abreu wrote:
> Mapping and unmapping the DMA region is a high bottleneck in the stmmac
> driver, especially in the RX path.
> 
> This commit introduces support for the Page Pool API and uses it in all RX
> queues. With this change, we get more stable throughput and some increase
> of bandwidth with iperf:
>   - MAC1000: 950 Mbps
>   - XGMAC: 9.22 Gbps
> 
> Signed-off-by: Jose Abreu 
> Cc: Joao Pinto 
> Cc: David S. Miller 
> Cc: Giuseppe Cavallaro 
> Cc: Alexandre Torgue 
> Cc: Maxime Coquelin 
> Cc: Maxime Ripard 
> Cc: Chen-Yu Tsai 
> ---
>  drivers/net/ethernet/stmicro/stmmac/Kconfig   |   1 +
>  drivers/net/ethernet/stmicro/stmmac/stmmac.h  |  10 +-
>  drivers/net/ethernet/stmicro/stmmac/stmmac_main.c | 196 
> ++
>  3 files changed, 63 insertions(+), 144 deletions(-)

...

> @@ -3306,49 +3295,22 @@ static inline void stmmac_rx_refill(struct 
> stmmac_priv *priv, u32 queue)
>   else
>   p = rx_q->dma_rx + entry;
>  
> - if (likely(!rx_q->rx_skbuff[entry])) {
> - struct sk_buff *skb;
> -
> - skb = netdev_alloc_skb_ip_align(priv->dev, bfsize);
> - if (unlikely(!skb)) {
> - /* so for a while no zero-copy! */
> - rx_q->rx_zeroc_thresh = STMMAC_RX_THRESH;
> - if (unlikely(net_ratelimit()))
> - dev_err(priv->device,
> - "fail to alloc skb entry %d\n",
> - entry);
> - break;
> - }
> -
> - rx_q->rx_skbuff[entry] = skb;
> - rx_q->rx_skbuff_dma[entry] =
> - dma_map_single(priv->device, skb->data, bfsize,
> -DMA_FROM_DEVICE);
> - if (dma_mapping_error(priv->device,
> -   rx_q->rx_skbuff_dma[entry])) {
> - netdev_err(priv->dev, "Rx DMA map failed\n");
> - dev_kfree_skb(skb);
> + if (!buf->page) {
> + buf->page = page_pool_dev_alloc_pages(rx_q->page_pool);
> + if (!buf->page)
>   break;
> - }
> -
> - stmmac_set_desc_addr(priv, p, 
> rx_q->rx_skbuff_dma[entry]);
> - stmmac_refill_desc3(priv, rx_q, p);
> -
> - if (rx_q->rx_zeroc_thresh > 0)
> - rx_q->rx_zeroc_thresh--;
> -
> - netif_dbg(priv, rx_status, priv->dev,
> -   "refill entry #%d\n", entry);
>   }
> - dma_wmb();
> +
> + buf->addr = buf->page->dma_addr;
> + stmmac_set_desc_addr(priv, p, buf->addr);
> + stmmac_refill_desc3(priv, rx_q, p);
>  
>   rx_q->rx_count_frames++;
>   rx_q->rx_count_frames %= priv->rx_coal_frames;
>   use_rx_wd = priv->use_riwt && rx_q->rx_count_frames;
>  
> - stmmac_set_rx_owner(priv, p, use_rx_wd);
> -
>   dma_wmb();
> + stmmac_set_rx_owner(priv, p, use_rx_wd);
>  
>   entry = STMMAC_GET_ENTRY(entry, DMA_RX_SIZE);
>   }

I was looking at this change in closer detail, and one thing that
stuck out to me was the hunk above, where the barrier was moved from
after the stmmac_set_rx_owner() call to before it.

So I moved it back and I no longer saw the crash. However, I then
recalled that I had the patch enabling the debug prints for the buffer
address applied; after reverting that, the crash occurred again.

In other words, what works for me is moving the above barrier and adding
the debug print, which appears to suggest that there is some
timing/coherency issue here. Anyway, maybe this is a clue to what is
going on?
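
Schematically, the ordering in question is the classic descriptor
publication pattern (a sketch with placeholder names desc, buf_addr,
ctrl and OWN_BIT; not the actual stmmac descriptor layout):

	/* 1. Fill in all descriptor fields the hardware will read. */
	desc->buf_addr = cpu_to_le32(lower_32_bits(dma));

	/* 2. Order those writes against the ownership handover. */
	dma_wmb();

	/* 3. Only now let the hardware see the descriptor as valid. */
	desc->ctrl |= cpu_to_le32(OWN_BIT);

On the read side, a dma_rmb() after checking the own/status bit (as
suggested earlier for stmmac_rx_status()) similarly keeps reads of the
remaining fields from being hoisted above that check. The diff below is
the change that was tested: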

diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c 
b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
index a7486c2f3221..2f016397231b 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
@@ -3303,8 +3303,8 @@ static inline void stmmac_rx_refill(struct stmmac_priv 
*priv, u32 queue)
rx_q->rx_count_frames %= priv->rx_coal_frames;
use_rx_wd = priv->use_riwt && rx_q->rx_count_frames;
 
-   dma_wmb();
stmmac_set_rx_owner(priv, p, use_rx_wd);
+   dma_wmb();
 
entry = STMMAC_GET_ENTRY(entry, DMA_RX_SIZE);
}
@@ -3438,6 +3438,10 @@ static int stmmac_rx(struct stmmac_priv *priv, int 
limit, u32 queue)
dma_sync_single_for_device(priv->device, buf->addr,
   frame_len, DMA_FROM_DEVICE);
 
+

Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-25 Thread Ilias Apalodimas
Hi Jon, Jose,
On Thu, Jul 25, 2019 at 10:45:46AM +0100, Jon Hunter wrote:
> 
> On 25/07/2019 08:44, Jose Abreu wrote:
> 
> ...
> 
> > OK. Can you please test what Ilias mentioned?
> > 
> > Basically you can hard-code the order to 0 in 
> > alloc_dma_rx_desc_resources():
> > - pp_params.order = DIV_ROUND_UP(priv->dma_buf_sz, PAGE_SIZE);
> > + pp_params.order = 0;
> > 
> > Unless you use an MTU > PAGE_SIZE.
> 
> I made the change but unfortunately the issue persists.

Yeah, tbh I didn't expect this to fix it, since I think the mappings are
fine, but it never hurts to verify.
@Jose: Can we add some debugging prints in the driver?
Ideally the pages the API allocates (on init), the page that the driver is
trying to use before the crash, and the size of the packet (right from the
device descriptor). Maybe this will tell us where the erroneous access is.

Thanks
/Ilias


Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-25 Thread Jon Hunter


On 25/07/2019 08:44, Jose Abreu wrote:

...

> OK. Can you please test what Ilias mentioned?
> 
> Basically you can hard-code the order to 0 in 
> alloc_dma_rx_desc_resources():
> - pp_params.order = DIV_ROUND_UP(priv->dma_buf_sz, PAGE_SIZE);
> + pp_params.order = 0;
> 
> Unless you use an MTU > PAGE_SIZE.

I made the change but unfortunately the issue persists.

Jon

-- 
nvpublic


RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-25 Thread Jose Abreu
From: Jon Hunter 
Date: Jul/24/2019, 12:58:15 (UTC+00:00)

> 
> On 24/07/2019 12:34, Jose Abreu wrote:
> > From: Jon Hunter 
> > Date: Jul/24/2019, 12:10:47 (UTC+00:00)
> > 
> >>
> >> On 24/07/2019 11:04, Jose Abreu wrote:
> >>
> >> ...
> >>
> >>> Jon, I was able to replicate (at some level) your setup:
> >>>
> >>> # dmesg | grep -i arm-smmu
> >>> [1.337322] arm-smmu 7004.iommu: probing hardware 
> >>> configuration...
> >>> [1.337330] arm-smmu 7004.iommu: SMMUv2 with:
> >>> [1.337338] arm-smmu 7004.iommu: stage 1 translation
> >>> [1.337346] arm-smmu 7004.iommu: stage 2 translation
> >>> [1.337354] arm-smmu 7004.iommu: nested translation
> >>> [1.337363] arm-smmu 7004.iommu: stream matching with 128 
> >>> register groups
> >>> [1.337374] arm-smmu 7004.iommu: 1 context banks (0 
> >>> stage-2 only)
> >>> [1.337383] arm-smmu 7004.iommu: Supported page sizes: 
> >>> 0x61311000
> >>> [1.337393] arm-smmu 7004.iommu: Stage-1: 48-bit VA -> 
> >>> 48-bit IPA
> >>> [1.337402] arm-smmu 7004.iommu: Stage-2: 48-bit IPA -> 
> >>> 48-bit PA
> >>>
> >>> # dmesg | grep -i stmmac
> >>> [1.344106] stmmaceth 7000.ethernet: Adding to iommu group 0
> >>> [1.344233] stmmaceth 7000.ethernet: no reset control found
> >>> [1.348276] stmmaceth 7000.ethernet: User ID: 0x10, Synopsys ID: 
> >>> 0x51
> >>> [1.348285] stmmaceth 7000.ethernet: DWMAC4/5
> >>> [1.348293] stmmaceth 7000.ethernet: DMA HW capability register 
> >>> supported
> >>> [1.348302] stmmaceth 7000.ethernet: RX Checksum Offload Engine 
> >>> supported
> >>> [1.348311] stmmaceth 7000.ethernet: TX Checksum insertion 
> >>> supported
> >>> [1.348320] stmmaceth 7000.ethernet: TSO supported
> >>> [1.348328] stmmaceth 7000.ethernet: Enable RX Mitigation via HW 
> >>> Watchdog Timer
> >>> [1.348337] stmmaceth 7000.ethernet: TSO feature enabled
> >>> [1.348409] libphy: stmmac: probed
> >>> [ 4159.140990] stmmaceth 7000.ethernet eth0: PHY [stmmac-0:01] 
> >>> driver [Generic PHY]
> >>> [ 4159.141005] stmmaceth 7000.ethernet eth0: phy: setting supported 
> >>> 00,,62ff advertising 00,,62ff
> >>> [ 4159.142359] stmmaceth 7000.ethernet eth0: No Safety Features 
> >>> support found
> >>> [ 4159.142369] stmmaceth 7000.ethernet eth0: IEEE 1588-2008 Advanced 
> >>> Timestamp supported
> >>> [ 4159.142429] stmmaceth 7000.ethernet eth0: registered PTP clock
> >>> [ 4159.142439] stmmaceth 7000.ethernet eth0: configuring for 
> >>> phy/gmii link mode
> >>> [ 4159.142452] stmmaceth 7000.ethernet eth0: phylink_mac_config: 
> >>> mode=phy/gmii/Unknown/Unknown adv=00,,62ff pause=10 link=0 
> >>> an=1
> >>> [ 4159.142466] stmmaceth 7000.ethernet eth0: phy link up 
> >>> gmii/1Gbps/Full
> >>> [ 4159.142475] stmmaceth 7000.ethernet eth0: phylink_mac_config: 
> >>> mode=phy/gmii/1Gbps/Full adv=00,, pause=0f link=1 an=0
> >>> [ 4159.142481] stmmaceth 7000.ethernet eth0: Link is Up - 1Gbps/Full 
> >>> - flow control rx/tx
> >>>
> >>> The only missing point is the NFS boot that I can't replicate with this 
> >>> setup. But I did some sanity checks:
> >>>
> >>> Remote Endpoint:
> >>> # dd if=/dev/urandom of=output.dat bs=128M count=1
> >>> # nc -c 192.168.0.2 1234 < output.dat
> >>> # md5sum output.dat 
> >>> fde9e0818281836e4fc0edfede2b8762  output.dat
> >>>
> >>> DUT:
> >>> # nc -l -c -p 1234 > output.dat
> >>> # md5sum output.dat 
> >>> fde9e0818281836e4fc0edfede2b8762  output.dat
> >>
> >> On my setup, if I do not use NFS to mount the rootfs, but then manually
> >> mount the NFS share after booting, I do not see any problems reading or
> >> writing to files on the share. So I am not sure if it is some sort of
> >> race that is occurring when mounting the NFS share on boot. It is 100%
> >> reproducible when using NFS for the root file-system.
> > 
> > I don't understand how there can be corruption then, unless the IP AXI
> > parameters are misconfigured, which can lead to sporadic undefined
> > behavior.
> > 
> > These prints from your logs:
> > [   14.579392] Run /init as init process
> > /init: line 58: chmod: command not found
> > [ 10:22:46 ] L4T-INITRD Build DATE: Mon Jul 22 10:22:46 UTC 2019
> > [ 10:22:46 ] Root device found: nfs
> > [ 10:22:46 ] Ethernet interfaces: eth0
> > [ 10:22:46 ] IP Address: 10.21.140.41
> > 
> > Where are they coming from? Do you have any extra init script?
> 
> By default there is an initial ramdisk that is loaded first, and then the
> rootfs is mounted over NFS. However, even if I remove this ramdisk and
> directly mount the rootfs via NFS, the problem persists. So I don't see
> any issue with the ramdisk; what's more, we have been using this for a
> long, long time. Nothing has changed here.

OK. Can you please test what 

Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-24 Thread Jon Hunter


On 24/07/2019 12:34, Jose Abreu wrote:
> From: Jon Hunter 
> Date: Jul/24/2019, 12:10:47 (UTC+00:00)
> 
>>
>> On 24/07/2019 11:04, Jose Abreu wrote:
>>
>> ...
>>
>>> Jon, I was able to replicate (at some level) your setup:
>>>
>>> # dmesg | grep -i arm-smmu
>>> [1.337322] arm-smmu 7004.iommu: probing hardware 
>>> configuration...
>>> [1.337330] arm-smmu 7004.iommu: SMMUv2 with:
>>> [1.337338] arm-smmu 7004.iommu: stage 1 translation
>>> [1.337346] arm-smmu 7004.iommu: stage 2 translation
>>> [1.337354] arm-smmu 7004.iommu: nested translation
>>> [1.337363] arm-smmu 7004.iommu: stream matching with 128 
>>> register groups
>>> [1.337374] arm-smmu 7004.iommu: 1 context banks (0 
>>> stage-2 only)
>>> [1.337383] arm-smmu 7004.iommu: Supported page sizes: 
>>> 0x61311000
>>> [1.337393] arm-smmu 7004.iommu: Stage-1: 48-bit VA -> 
>>> 48-bit IPA
>>> [1.337402] arm-smmu 7004.iommu: Stage-2: 48-bit IPA -> 
>>> 48-bit PA
>>>
>>> # dmesg | grep -i stmmac
>>> [1.344106] stmmaceth 7000.ethernet: Adding to iommu group 0
>>> [1.344233] stmmaceth 7000.ethernet: no reset control found
>>> [1.348276] stmmaceth 7000.ethernet: User ID: 0x10, Synopsys ID: 
>>> 0x51
>>> [1.348285] stmmaceth 7000.ethernet: DWMAC4/5
>>> [1.348293] stmmaceth 7000.ethernet: DMA HW capability register 
>>> supported
>>> [1.348302] stmmaceth 7000.ethernet: RX Checksum Offload Engine 
>>> supported
>>> [1.348311] stmmaceth 7000.ethernet: TX Checksum insertion 
>>> supported
>>> [1.348320] stmmaceth 7000.ethernet: TSO supported
>>> [1.348328] stmmaceth 7000.ethernet: Enable RX Mitigation via HW 
>>> Watchdog Timer
>>> [1.348337] stmmaceth 7000.ethernet: TSO feature enabled
>>> [1.348409] libphy: stmmac: probed
>>> [ 4159.140990] stmmaceth 7000.ethernet eth0: PHY [stmmac-0:01] 
>>> driver [Generic PHY]
>>> [ 4159.141005] stmmaceth 7000.ethernet eth0: phy: setting supported 
>>> 00,,62ff advertising 00,,62ff
>>> [ 4159.142359] stmmaceth 7000.ethernet eth0: No Safety Features 
>>> support found
>>> [ 4159.142369] stmmaceth 7000.ethernet eth0: IEEE 1588-2008 Advanced 
>>> Timestamp supported
>>> [ 4159.142429] stmmaceth 7000.ethernet eth0: registered PTP clock
>>> [ 4159.142439] stmmaceth 7000.ethernet eth0: configuring for 
>>> phy/gmii link mode
>>> [ 4159.142452] stmmaceth 7000.ethernet eth0: phylink_mac_config: 
>>> mode=phy/gmii/Unknown/Unknown adv=00,,62ff pause=10 link=0 
>>> an=1
>>> [ 4159.142466] stmmaceth 7000.ethernet eth0: phy link up 
>>> gmii/1Gbps/Full
>>> [ 4159.142475] stmmaceth 7000.ethernet eth0: phylink_mac_config: 
>>> mode=phy/gmii/1Gbps/Full adv=00,, pause=0f link=1 an=0
>>> [ 4159.142481] stmmaceth 7000.ethernet eth0: Link is Up - 1Gbps/Full 
>>> - flow control rx/tx
>>>
>>> The only missing point is the NFS boot that I can't replicate with this 
>>> setup. But I did some sanity checks:
>>>
>>> Remote Endpoint:
>>> # dd if=/dev/urandom of=output.dat bs=128M count=1
>>> # nc -c 192.168.0.2 1234 < output.dat
>>> # md5sum output.dat 
>>> fde9e0818281836e4fc0edfede2b8762  output.dat
>>>
>>> DUT:
>>> # nc -l -c -p 1234 > output.dat
>>> # md5sum output.dat 
>>> fde9e0818281836e4fc0edfede2b8762  output.dat
>>
>> On my setup, if I do not use NFS to mount the rootfs, but then manually
>> mount the NFS share after booting, I do not see any problems reading or
>> writing to files on the share. So I am not sure if it is some sort of
>> race that is occurring when mounting the NFS share on boot. It is 100%
>> reproducible when using NFS for the root file-system.
> 
> I don't understand how there can be corruption then, unless the IP AXI
> parameters are misconfigured, which can lead to sporadic undefined
> behavior.
> 
> These prints from your logs:
> [   14.579392] Run /init as init process
> /init: line 58: chmod: command not found
> [ 10:22:46 ] L4T-INITRD Build DATE: Mon Jul 22 10:22:46 UTC 2019
> [ 10:22:46 ] Root device found: nfs
> [ 10:22:46 ] Ethernet interfaces: eth0
> [ 10:22:46 ] IP Address: 10.21.140.41
> 
> Where are they coming from? Do you have any extra init script?

By default there is an initial ramdisk that is loaded first, and then the
rootfs is mounted over NFS. However, even if I remove this ramdisk and
directly mount the rootfs via NFS, the problem persists. So I don't see
any issue with the ramdisk; what's more, we have been using this for a
long, long time. Nothing has changed here.

Jon

-- 
nvpublic


RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-24 Thread Jose Abreu
From: Jon Hunter 
Date: Jul/24/2019, 12:10:47 (UTC+00:00)

> 
> On 24/07/2019 11:04, Jose Abreu wrote:
> 
> ...
> 
> > Jon, I was able to replicate (at some level) your setup:
> > 
> > # dmesg | grep -i arm-smmu
> > [1.337322] arm-smmu 7004.iommu: probing hardware 
> > configuration...
> > [1.337330] arm-smmu 7004.iommu: SMMUv2 with:
> > [1.337338] arm-smmu 7004.iommu: stage 1 translation
> > [1.337346] arm-smmu 7004.iommu: stage 2 translation
> > [1.337354] arm-smmu 7004.iommu: nested translation
> > [1.337363] arm-smmu 7004.iommu: stream matching with 128 
> > register groups
> > [1.337374] arm-smmu 7004.iommu: 1 context banks (0 
> > stage-2 only)
> > [1.337383] arm-smmu 7004.iommu: Supported page sizes: 
> > 0x61311000
> > [1.337393] arm-smmu 7004.iommu: Stage-1: 48-bit VA -> 
> > 48-bit IPA
> > [1.337402] arm-smmu 7004.iommu: Stage-2: 48-bit IPA -> 
> > 48-bit PA
> > 
> > # dmesg | grep -i stmmac
> > [1.344106] stmmaceth 7000.ethernet: Adding to iommu group 0
> > [1.344233] stmmaceth 7000.ethernet: no reset control found
> > [1.348276] stmmaceth 7000.ethernet: User ID: 0x10, Synopsys ID: 
> > 0x51
> > [1.348285] stmmaceth 7000.ethernet: DWMAC4/5
> > [1.348293] stmmaceth 7000.ethernet: DMA HW capability register 
> > supported
> > [1.348302] stmmaceth 7000.ethernet: RX Checksum Offload Engine 
> > supported
> > [1.348311] stmmaceth 7000.ethernet: TX Checksum insertion 
> > supported
> > [1.348320] stmmaceth 7000.ethernet: TSO supported
> > [1.348328] stmmaceth 7000.ethernet: Enable RX Mitigation via HW 
> > Watchdog Timer
> > [1.348337] stmmaceth 7000.ethernet: TSO feature enabled
> > [1.348409] libphy: stmmac: probed
> > [ 4159.140990] stmmaceth 7000.ethernet eth0: PHY [stmmac-0:01] 
> > driver [Generic PHY]
> > [ 4159.141005] stmmaceth 7000.ethernet eth0: phy: setting supported 
> > 00,,62ff advertising 00,,62ff
> > [ 4159.142359] stmmaceth 7000.ethernet eth0: No Safety Features 
> > support found
> > [ 4159.142369] stmmaceth 7000.ethernet eth0: IEEE 1588-2008 Advanced 
> > Timestamp supported
> > [ 4159.142429] stmmaceth 7000.ethernet eth0: registered PTP clock
> > [ 4159.142439] stmmaceth 7000.ethernet eth0: configuring for 
> > phy/gmii link mode
> > [ 4159.142452] stmmaceth 7000.ethernet eth0: phylink_mac_config: 
> > mode=phy/gmii/Unknown/Unknown adv=00,,62ff pause=10 link=0 
> > an=1
> > [ 4159.142466] stmmaceth 7000.ethernet eth0: phy link up 
> > gmii/1Gbps/Full
> > [ 4159.142475] stmmaceth 7000.ethernet eth0: phylink_mac_config: 
> > mode=phy/gmii/1Gbps/Full adv=00,, pause=0f link=1 an=0
> > [ 4159.142481] stmmaceth 7000.ethernet eth0: Link is Up - 1Gbps/Full 
> > - flow control rx/tx
> > 
> > The only missing point is the NFS boot that I can't replicate with this 
> > setup. But I did some sanity checks:
> > 
> > Remote Endpoint:
> > # dd if=/dev/urandom of=output.dat bs=128M count=1
> > # nc -c 192.168.0.2 1234 < output.dat
> > # md5sum output.dat 
> > fde9e0818281836e4fc0edfede2b8762  output.dat
> > 
> > DUT:
> > # nc -l -c -p 1234 > output.dat
> > # md5sum output.dat 
> > fde9e0818281836e4fc0edfede2b8762  output.dat
> 
> On my setup, if I do not use NFS to mount the rootfs, but then manually
> mount the NFS share after booting, I do not see any problems reading or
> writing to files on the share. So I am not sure if it is some sort of
> race that is occurring when mounting the NFS share on boot. It is 100%
> reproducible when using NFS for the root file-system.

I don't understand how there can be corruption then, unless the IP AXI
parameters are misconfigured, which can lead to sporadic undefined
behavior.

These prints from your logs:
[   14.579392] Run /init as init process
/init: line 58: chmod: command not found
[ 10:22:46 ] L4T-INITRD Build DATE: Mon Jul 22 10:22:46 UTC 2019
[ 10:22:46 ] Root device found: nfs
[ 10:22:46 ] Ethernet interfaces: eth0
[ 10:22:46 ] IP Address: 10.21.140.41

Where are they coming from? Do you have any extra init script?

---
Thanks,
Jose Miguel Abreu


Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-24 Thread Jon Hunter


On 24/07/2019 11:04, Jose Abreu wrote:

...

> Jon, I was able to replicate (at some level) your setup:
> 
> # dmesg | grep -i arm-smmu
> [1.337322] arm-smmu 7004.iommu: probing hardware 
> configuration...
> [1.337330] arm-smmu 7004.iommu: SMMUv2 with:
> [1.337338] arm-smmu 7004.iommu: stage 1 translation
> [1.337346] arm-smmu 7004.iommu: stage 2 translation
> [1.337354] arm-smmu 7004.iommu: nested translation
> [1.337363] arm-smmu 7004.iommu: stream matching with 128 
> register groups
> [1.337374] arm-smmu 7004.iommu: 1 context banks (0 
> stage-2 only)
> [1.337383] arm-smmu 7004.iommu: Supported page sizes: 
> 0x61311000
> [1.337393] arm-smmu 7004.iommu: Stage-1: 48-bit VA -> 
> 48-bit IPA
> [1.337402] arm-smmu 7004.iommu: Stage-2: 48-bit IPA -> 
> 48-bit PA
> 
> # dmesg | grep -i stmmac
> [1.344106] stmmaceth 7000.ethernet: Adding to iommu group 0
> [1.344233] stmmaceth 7000.ethernet: no reset control found
> [1.348276] stmmaceth 7000.ethernet: User ID: 0x10, Synopsys ID: 
> 0x51
> [1.348285] stmmaceth 7000.ethernet: DWMAC4/5
> [1.348293] stmmaceth 7000.ethernet: DMA HW capability register 
> supported
> [1.348302] stmmaceth 7000.ethernet: RX Checksum Offload Engine 
> supported
> [1.348311] stmmaceth 7000.ethernet: TX Checksum insertion 
> supported
> [1.348320] stmmaceth 7000.ethernet: TSO supported
> [1.348328] stmmaceth 7000.ethernet: Enable RX Mitigation via HW 
> Watchdog Timer
> [1.348337] stmmaceth 7000.ethernet: TSO feature enabled
> [1.348409] libphy: stmmac: probed
> [ 4159.140990] stmmaceth 7000.ethernet eth0: PHY [stmmac-0:01] 
> driver [Generic PHY]
> [ 4159.141005] stmmaceth 7000.ethernet eth0: phy: setting supported 
> 00,,62ff advertising 00,,62ff
> [ 4159.142359] stmmaceth 7000.ethernet eth0: No Safety Features 
> support found
> [ 4159.142369] stmmaceth 7000.ethernet eth0: IEEE 1588-2008 Advanced 
> Timestamp supported
> [ 4159.142429] stmmaceth 7000.ethernet eth0: registered PTP clock
> [ 4159.142439] stmmaceth 7000.ethernet eth0: configuring for 
> phy/gmii link mode
> [ 4159.142452] stmmaceth 7000.ethernet eth0: phylink_mac_config: 
> mode=phy/gmii/Unknown/Unknown adv=00,,62ff pause=10 link=0 
> an=1
> [ 4159.142466] stmmaceth 7000.ethernet eth0: phy link up 
> gmii/1Gbps/Full
> [ 4159.142475] stmmaceth 7000.ethernet eth0: phylink_mac_config: 
> mode=phy/gmii/1Gbps/Full adv=00,, pause=0f link=1 an=0
> [ 4159.142481] stmmaceth 7000.ethernet eth0: Link is Up - 1Gbps/Full 
> - flow control rx/tx
> 
> The only missing point is the NFS boot that I can't replicate with this 
> setup. But I did some sanity checks:
> 
> Remote Endpoint:
> # dd if=/dev/urandom of=output.dat bs=128M count=1
> # nc -c 192.168.0.2 1234 < output.dat
> # md5sum output.dat 
> fde9e0818281836e4fc0edfede2b8762  output.dat
> 
> DUT:
> # nc -l -c -p 1234 > output.dat
> # md5sum output.dat 
> fde9e0818281836e4fc0edfede2b8762  output.dat

On my setup, if I do not use NFS to mount the rootfs, but then manually
mount the NFS share after booting, I do not see any problems reading or
writing to files on the share. So I am not sure if it is some sort of
race that is occurring when mounting the NFS share on boot. It is 100%
reproducible when using NFS for the root file-system.

I am using the Jetson TX2 devkit [0] to test this.

Cheers
Jon

[0] https://developer.nvidia.com/embedded/jetson-tx2-developer-kit

-- 
nvpublic


RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-24 Thread Jose Abreu
From: Ilias Apalodimas 
Date: Jul/24/2019, 10:53:10 (UTC+00:00)

> Jose, 
> > From: Ilias Apalodimas 
> > Date: Jul/24/2019, 09:54:27 (UTC+00:00)
> > 
> > > Hi David, 
> > > 
> > > > From: Jon Hunter 
> > > > Date: Tue, 23 Jul 2019 13:09:00 +0100
> > > > 
> > > > > Setting "iommu.passthrough=1" works for me. However, I am not sure 
> > > > > where
> > > > > to go from here, so any ideas you have would be great.
> > > > 
> > > > Then definitely we are accessing outside of a valid IOMMU mapping due
> > > > to the page pool support changes.
> > > 
> > > Yes. On the netsec driver I did test with and without the SMMU to make
> > > sure I am not breaking anything.
> > > Since we map the whole page in the API, I think some offset in the
> > > driver causes that. In any case I'll have another look at page_pool to
> > > make sure we are not missing anything.
> > 
> > Ilias, can it be due to this:
> > 
> > stmmac_main.c:
> > pp_params.order = DIV_ROUND_UP(priv->dma_buf_sz, PAGE_SIZE);
> > 
> > page_pool.c:
> > dma = dma_map_page_attrs(pool->p.dev, page, 0,
> >  (PAGE_SIZE << pool->p.order),
> >  pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
> > 
> > "order", will be at least 1 and then mapping the page can cause overlap 
> > ?
> 
> Well, the API is calling the map with the correct page, page offset (0),
> and size, right? I don't see any overlap here. Aren't we mapping what we
> allocate?
> 
> Why do you need higher-order pages? Jumbo frames? Can we do a quick test
> with the order being 0?

Yes, it's for Jumbo frames that can be as large as 16k.

From Jon's logs it can be seen that buffers are 8k but frames are 1500
bytes max, so it is using order = 1.
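
For reference, a sketch of the page_pool setup being discussed (field
names per the 2019-era page_pool API; pp_params.order normally comes
from DIV_ROUND_UP(priv->dma_buf_sz, PAGE_SIZE), and hard-coding 0
forces single-page buffers for the test):

	struct page_pool_params pp_params = { 0 };

	pp_params.flags = PP_FLAG_DMA_MAP;	/* pool maps the pages */
	pp_params.order = 0;			/* test: no higher-order pages */
	pp_params.pool_size = DMA_RX_SIZE;
	pp_params.nid = dev_to_node(priv->device);
	pp_params.dev = priv->device;
	pp_params.dma_dir = DMA_FROM_DEVICE;

	rx_q->page_pool = page_pool_create(&pp_params);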

Jon, I was able to replicate (at some level) your setup:

# dmesg | grep -i arm-smmu
[1.337322] arm-smmu 7004.iommu: probing hardware 
configuration...
[1.337330] arm-smmu 7004.iommu: SMMUv2 with:
[1.337338] arm-smmu 7004.iommu: stage 1 translation
[1.337346] arm-smmu 7004.iommu: stage 2 translation
[1.337354] arm-smmu 7004.iommu: nested translation
[1.337363] arm-smmu 7004.iommu: stream matching with 128 
register groups
[1.337374] arm-smmu 7004.iommu: 1 context banks (0 
stage-2 only)
[1.337383] arm-smmu 7004.iommu: Supported page sizes: 
0x61311000
[1.337393] arm-smmu 7004.iommu: Stage-1: 48-bit VA -> 
48-bit IPA
[1.337402] arm-smmu 7004.iommu: Stage-2: 48-bit IPA -> 
48-bit PA

# dmesg | grep -i stmmac
[1.344106] stmmaceth 7000.ethernet: Adding to iommu group 0
[1.344233] stmmaceth 7000.ethernet: no reset control found
[1.348276] stmmaceth 7000.ethernet: User ID: 0x10, Synopsys ID: 0x51
[1.348285] stmmaceth 7000.ethernet: DWMAC4/5
[1.348293] stmmaceth 7000.ethernet: DMA HW capability register supported
[1.348302] stmmaceth 7000.ethernet: RX Checksum Offload Engine supported
[1.348311] stmmaceth 7000.ethernet: TX Checksum insertion supported
[1.348320] stmmaceth 7000.ethernet: TSO supported
[1.348328] stmmaceth 7000.ethernet: Enable RX Mitigation via HW Watchdog Timer
[1.348337] stmmaceth 7000.ethernet: TSO feature enabled
[1.348409] libphy: stmmac: probed
[ 4159.140990] stmmaceth 7000.ethernet eth0: PHY [stmmac-0:01] driver [Generic PHY]
[ 4159.141005] stmmaceth 7000.ethernet eth0: phy: setting supported 00,,62ff advertising 00,,62ff
[ 4159.142359] stmmaceth 7000.ethernet eth0: No Safety Features support found
[ 4159.142369] stmmaceth 7000.ethernet eth0: IEEE 1588-2008 Advanced Timestamp supported
[ 4159.142429] stmmaceth 7000.ethernet eth0: registered PTP clock
[ 4159.142439] stmmaceth 7000.ethernet eth0: configuring for phy/gmii link mode
[ 4159.142452] stmmaceth 7000.ethernet eth0: phylink_mac_config: mode=phy/gmii/Unknown/Unknown adv=00,,62ff pause=10 link=0 an=1
[ 4159.142466] stmmaceth 7000.ethernet eth0: phy link up gmii/1Gbps/Full
[ 4159.142475] stmmaceth 7000.ethernet eth0: phylink_mac_config: mode=phy/gmii/1Gbps/Full adv=00,, pause=0f link=1 an=0
[ 4159.142481] stmmaceth 7000.ethernet eth0: Link is Up - 1Gbps/Full - flow control rx/tx

The only missing piece is the NFS boot, which I can't replicate with this 
setup. But I did run some sanity checks:

Remote Endpoint:
# dd if=/dev/urandom of=output.dat bs=128M count=1
# nc -c 192.168.0.2 1234 < output.dat
# md5sum output.dat 
fde9e0818281836e4fc0edfede2b8762  output.dat

DUT:
# nc -l -c -p 1234 > output.dat
# md5sum output.dat 
fde9e0818281836e4fc0edfede2b8762  output.dat

---
Thanks,
Jose Miguel Abreu


Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-24 Thread Robin Murphy

On 23/07/2019 22:39, Jon Hunter wrote:


On 23/07/2019 14:19, Robin Murphy wrote:

...


Do you know if the SMMU interrupts are working correctly? If not, it's
possible that an incorrect address or mapping direction could lead to
the DMA transaction just being silently terminated without any fault
indication, which generally presents as inexplicable weirdness (I've
certainly seen that on another platform with the mix of an unsupported
interrupt controller and an 'imperfect' ethernet driver).


If I simply remove the iommu node for the ethernet controller, then I
see lots of ...

[    6.296121] arm-smmu 1200.iommu: Unexpected global fault, this could be serious
[    6.296125] arm-smmu 1200.iommu: GFSR 0x0002, GFSYNR0 0x, GFSYNR1 0x0014, GFSYNR2 0x

So I assume that this is triggering the SMMU interrupt correctly.


According to tegra186.dtsi it appears you're using the MMU-500 combined
interrupt, so if global faults are being delivered then context faults
*should* also, but I'd be inclined to try a quick hack of the relevant
stmmac_desc_ops::set_addr callback to write some bogus unmapped address
just to make sure arm_smmu_context_fault() then screams as expected, and
we're not missing anything else.


I hacked the driver and forced the address to zero for a test and
in doing so I see ...

[   10.440072] arm-smmu 1200.iommu: Unhandled context fault: fsr=0x402, iova=0x, fsynr=0x1c0011, cbfrsynra=0x14, cb=0

So looks like the interrupts are working AFAICT.


OK, that's good, thanks for confirming. Unfortunately that now leaves us 
with the challenge of figuring out how things are managing to go wrong 
*without* ever faulting... :)


I wonder if we can provoke the failure on non-IOMMU platforms with 
"swiotlb=force" - I have a few boxes I could potentially test that on, 
but sadly forgot my plan to bring one with me this morning.


Robin.


Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-24 Thread Ilias Apalodimas
Jose, 
> From: Ilias Apalodimas 
> Date: Jul/24/2019, 09:54:27 (UTC+00:00)
> 
> > Hi David, 
> > 
> > > From: Jon Hunter 
> > > Date: Tue, 23 Jul 2019 13:09:00 +0100
> > > 
> > > > Setting "iommu.passthrough=1" works for me. However, I am not sure where
> > > > to go from here, so any ideas you have would be great.
> > > 
> > > Then definitely we are accessing outside of a valid IOMMU mapping due
> > > to the page pool support changes.
> > 
> > Yes. On the netsec driver I did test with and without SMMU to make sure 
> > I am not breaking anything.
> > Since we map the whole page in the API I think some offset in the driver 
> > causes that. In any case I'll have another look at page_pool to make sure 
> > we are not missing anything. 
> 
> Ilias, can it be due to this:
> 
> stmmac_main.c:
>   pp_params.order = DIV_ROUND_UP(priv->dma_buf_sz, PAGE_SIZE);
> 
> page_pool.c:
>   dma = dma_map_page_attrs(pool->p.dev, page, 0,
>(PAGE_SIZE << pool->p.order),
>pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
> 
> "order", will be at least 1 and then mapping the page can cause overlap 
> ?

Well, the API is calling the map with the correct page, page offset (0) and size,
right? I don't see any overlapping here. Aren't we mapping what we allocate?

Why do you need higher order pages? Jumbo frames? Can we do a quick test with
the order being 0?
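
(For the record, that quick test amounts to a one-line change along these 
lines; a sketch, not a tested patch:)

    /* stmmac_main.c: force single-page buffers for the experiment */
    pp_params.order = 0; /* instead of DIV_ROUND_UP(priv->dma_buf_sz, PAGE_SIZE) */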

Thanks,
/Ilias


RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-24 Thread Jose Abreu
From: Ilias Apalodimas 
Date: Jul/24/2019, 09:54:27 (UTC+00:00)

> Hi David, 
> 
> > From: Jon Hunter 
> > Date: Tue, 23 Jul 2019 13:09:00 +0100
> > 
> > > Setting "iommu.passthrough=1" works for me. However, I am not sure where
> > > to go from here, so any ideas you have would be great.
> > 
> > Then definitely we are accessing outside of a valid IOMMU mapping due
> > to the page pool support changes.
> 
> Yes. On the netsec driver I did test with and without SMMU to make sure 
> I am not breaking anything.
> Since we map the whole page in the API I think some offset in the driver 
> causes that. In any case I'll have another look at page_pool to make sure 
> we are not missing anything. 

Ilias, can it be due to this:

stmmac_main.c:
pp_params.order = DIV_ROUND_UP(priv->dma_buf_sz, PAGE_SIZE);

page_pool.c:
dma = dma_map_page_attrs(pool->p.dev, page, 0,
 (PAGE_SIZE << pool->p.order),
 pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC);

"order", will be at least 1 and then mapping the page can cause overlap 
?

---
Thanks,
Jose Miguel Abreu


Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-24 Thread Ilias Apalodimas
Hi David, 

> From: Jon Hunter 
> Date: Tue, 23 Jul 2019 13:09:00 +0100
> 
> > Setting "iommu.passthrough=1" works for me. However, I am not sure where
> > to go from here, so any ideas you have would be great.
> 
> Then definitely we are accessing outside of a valid IOMMU mapping due
> to the page pool support changes.

Yes. On the netsec driver I did test with and without SMMU to make sure I am not
breaking anything.
Since we map the whole page in the API I think some offset in the driver causes
that. In any case I'll have another look at page_pool to make sure we are not
missing anything. 

> 
> Such a problem should be spotted with swiotlb enabled with debugging.

Thanks
/Ilias


Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-23 Thread Jon Hunter


On 23/07/2019 14:19, Robin Murphy wrote:

...

>>> Do you know if the SMMU interrupts are working correctly? If not, it's
>>> possible that an incorrect address or mapping direction could lead to
>>> the DMA transaction just being silently terminated without any fault
>>> indication, which generally presents as inexplicable weirdness (I've
>>> certainly seen that on another platform with the mix of an unsupported
>>> interrupt controller and an 'imperfect' ethernet driver).
>>
>> If I simply remove the iommu node for the ethernet controller, then I
>> see lots of ...
>>
>> [    6.296121] arm-smmu 1200.iommu: Unexpected global fault, this could be serious
>> [    6.296125] arm-smmu 1200.iommu: GFSR 0x0002, GFSYNR0 0x, GFSYNR1 0x0014, GFSYNR2 0x
>>
>> So I assume that this is triggering the SMMU interrupt correctly.
> 
> According to tegra186.dtsi it appears you're using the MMU-500 combined
> interrupt, so if global faults are being delivered then context faults
> *should* also, but I'd be inclined to try a quick hack of the relevant
> stmmac_desc_ops::set_addr callback to write some bogus unmapped address
> just to make sure arm_smmu_context_fault() then screams as expected, and
> we're not missing anything else.

I hacked the driver and forced the address to zero for a test and
in doing so I see ...

[   10.440072] arm-smmu 1200.iommu: Unhandled context fault: fsr=0x402, iova=0x, fsynr=0x1c0011, cbfrsynra=0x14, cb=0

So looks like the interrupts are working AFAICT.
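
For reference, the hack boils down to something like this in the dwmac4 
descriptor ops (a sketch; the exact set_addr callback depends on the 
descriptor variant in use):

    static void dwmac4_set_addr(struct dma_desc *p, dma_addr_t addr)
    {
        /* deliberately ignore 'addr' and program a bogus unmapped
         * address so the SMMU raises a context fault on the first DMA */
        p->des0 = cpu_to_le32(0);
        p->des1 = cpu_to_le32(0);
    }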

Cheers
Jon

-- 
nvpublic


Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-23 Thread David Miller
From: Jon Hunter 
Date: Tue, 23 Jul 2019 13:09:00 +0100

> Setting "iommu.passthrough=1" works for me. However, I am not sure where
> to go from here, so any ideas you have would be great.

Then definitely we are accessing outside of a valid IOMMU mapping due
to the page pool support changes.

Such a problem should be spotted with swiotlb enabled with debugging.


Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-23 Thread Jon Hunter


On 23/07/2019 13:51, Jose Abreu wrote:
> From: Jon Hunter 
> Date: Jul/23/2019, 12:58:55 (UTC+00:00)
> 
>>
>> On 23/07/2019 11:49, Jose Abreu wrote:
>>> From: Jon Hunter 
>>> Date: Jul/23/2019, 11:38:33 (UTC+00:00)
>>>

 On 23/07/2019 11:07, Jose Abreu wrote:
> From: Jon Hunter 
> Date: Jul/23/2019, 11:01:24 (UTC+00:00)
>
>> This appears to be a winner and by disabling the SMMU for the ethernet
>> controller and reverting commit 954a03be033c7cef80ddc232e7cbdb17df735663
>> this worked! So yes appears to be related to the SMMU being enabled. We
>> had to enable the SMMU for ethernet recently due to commit
>> 954a03be033c7cef80ddc232e7cbdb17df735663.
>
> Finally :)
>
> However, from "git show 954a03be033c7cef80ddc232e7cbdb17df735663":
>
> + There are few reasons to allow unmatched stream bypass, and
> + even fewer good ones.  If saying YES here breaks your board
> + you should work on fixing your board.
>
> So, how can we fix this ? Is your ethernet DT node marked as 
> "dma-coherent;" ?

 TBH I have no idea. I can't say I fully understand your change or how it
 is breaking things for us.

 Currently, the Tegra DT binding does not have 'dma-coherent' set. I see
 this is optional, but I am not sure how you determine whether or not
 this should be set.
>>>
>>> From my understanding it means that your device / IP DMA accesses are 
>>> coherent from the CPU's point of view. I think that will be the case if 
>>> the GMAC is not behind any kind of IOMMU in the HW architecture.
>>
>> I understand what coherency is, I just don't know how you tell if this
>> implementation of the ethernet controller is coherent or not.
> 
> Do you have any detailed diagram of your HW ? Such as blocks / IPs 
> connection, address space wiring , ...

Yes, this can be found in the Tegra X2 Technical Reference Manual [0].
Unfortunately, you need to create an account to download it.

Jon

[0] https://developer.nvidia.com/embedded/dlc/parker-series-trm

-- 
nvpublic


Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-23 Thread Robin Murphy

On 23/07/2019 13:09, Jon Hunter wrote:


On 23/07/2019 11:29, Robin Murphy wrote:

On 23/07/2019 11:07, Jose Abreu wrote:

From: Jon Hunter 
Date: Jul/23/2019, 11:01:24 (UTC+00:00)


This appears to be a winner and by disabling the SMMU for the ethernet
controller and reverting commit 954a03be033c7cef80ddc232e7cbdb17df735663
this worked! So yes appears to be related to the SMMU being enabled. We
had to enable the SMMU for ethernet recently due to commit
954a03be033c7cef80ddc232e7cbdb17df735663.


Finally :)

However, from "git show 954a03be033c7cef80ddc232e7cbdb17df735663":

+ There are few reasons to allow unmatched stream bypass, and
+ even fewer good ones.  If saying YES here breaks your board
+ you should work on fixing your board.

So, how can we fix this ? Is your ethernet DT node marked as
"dma-coherent;" ?


The first thing to try would be booting the failing setup with
"iommu.passthrough=1" (or using CONFIG_IOMMU_DEFAULT_PASSTHROUGH) - if
that makes things seem OK, then the problem is likely related to address
translation; if not, then it's probably time to start looking at nasties
like coherency and ordering, although in principle I wouldn't expect the
SMMU to have too much impact there.


Setting "iommu.passthrough=1" works for me. However, I am not sure where
to go from here, so any ideas you have would be great.


OK, so that really implies it's something to do with the addresses. From 
a quick skim of the patch, I'm wondering if it's possible for buf->addr 
and buf->page->dma_addr to get out-of-sync at any point. The nature of 
the IOVA allocator makes it quite likely that a stale DMA address will 
have been reused for a new mapping, so putting the wrong address in a 
descriptor may well mean the DMA still ends up hitting a valid 
translation, but which is now pointing to a different page.
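
To make that concrete, the refill step has to look something like the 
following for the addresses to stay in sync (a hypothetical sketch built 
around the patch's buf->addr and buf->page fields, not a quote of the driver):

    struct stmmac_rx_buffer {
        struct page *page;
        dma_addr_t addr;    /* cached copy of the page's DMA address */
    };

    /* RX refill: buf->addr must be refreshed on every new allocation;
     * if any path skips this, the descriptor is written with a stale
     * IOVA that the IOMMU may since have reassigned to another page */
    buf->page = page_pool_dev_alloc_pages(rx_q->page_pool);
    buf->addr = page_pool_get_dma_addr(buf->page);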



Do you know if the SMMU interrupts are working correctly? If not, it's
possible that an incorrect address or mapping direction could lead to
the DMA transaction just being silently terminated without any fault
indication, which generally presents as inexplicable weirdness (I've
certainly seen that on another platform with the mix of an unsupported
interrupt controller and an 'imperfect' ethernet driver).


If I simply remove the iommu node for the ethernet controller, then I
see lots of ...

[6.296121] arm-smmu 1200.iommu: Unexpected global fault, this could be serious
[6.296125] arm-smmu 1200.iommu: GFSR 0x0002, GFSYNR0 0x, GFSYNR1 0x0014, GFSYNR2 0x

So I assume that this is triggering the SMMU interrupt correctly.


According to tegra186.dtsi it appears you're using the MMU-500 combined 
interrupt, so if global faults are being delivered then context faults 
*should* also, but I'd be inclined to try a quick hack of the relevant 
stmmac_desc_ops::set_addr callback to write some bogus unmapped address 
just to make sure arm_smmu_context_fault() then screams as expected, and 
we're not missing anything else.


Robin.


RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-23 Thread Jose Abreu
From: Jon Hunter 
Date: Jul/23/2019, 12:58:55 (UTC+00:00)

> 
> On 23/07/2019 11:49, Jose Abreu wrote:
> > From: Jon Hunter 
> > Date: Jul/23/2019, 11:38:33 (UTC+00:00)
> > 
> >>
> >> On 23/07/2019 11:07, Jose Abreu wrote:
> >>> From: Jon Hunter 
> >>> Date: Jul/23/2019, 11:01:24 (UTC+00:00)
> >>>
>  This appears to be a winner and by disabling the SMMU for the ethernet
>  controller and reverting commit 954a03be033c7cef80ddc232e7cbdb17df735663
>  this worked! So yes appears to be related to the SMMU being enabled. We
>  had to enable the SMMU for ethernet recently due to commit
>  954a03be033c7cef80ddc232e7cbdb17df735663.
> >>>
> >>> Finally :)
> >>>
> >>> However, from "git show 954a03be033c7cef80ddc232e7cbdb17df735663":
> >>>
> >>> + There are few reasons to allow unmatched stream bypass, and
> >>> + even fewer good ones.  If saying YES here breaks your board
> >>> + you should work on fixing your board.
> >>>
> >>> So, how can we fix this ? Is your ethernet DT node marked as 
> >>> "dma-coherent;" ?
> >>
> >> TBH I have no idea. I can't say I fully understand your change or how it
> >> is breaking things for us.
> >>
> >> Currently, the Tegra DT binding does not have 'dma-coherent' set. I see
> >> this is optional, but I am not sure how you determine whether or not
> >> this should be set.
> > 
> > From my understanding it means that your device / IP DMA accesses are 
> > coherent from the CPU's point of view. I think that will be the case if 
> > the GMAC is not behind any kind of IOMMU in the HW architecture.
> 
> I understand what coherency is, I just don't know how you tell if this
> implementation of the ethernet controller is coherent or not.

Do you have any detailed diagram of your HW ? Such as blocks / IPs 
connection, address space wiring , ...

---
Thanks,
Jose Miguel Abreu


Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-23 Thread Jon Hunter


On 23/07/2019 11:29, Robin Murphy wrote:
> On 23/07/2019 11:07, Jose Abreu wrote:
>> From: Jon Hunter 
>> Date: Jul/23/2019, 11:01:24 (UTC+00:00)
>>
>>> This appears to be a winner and by disabling the SMMU for the ethernet
>>> controller and reverting commit 954a03be033c7cef80ddc232e7cbdb17df735663
>>> this worked! So yes appears to be related to the SMMU being enabled. We
>>> had to enable the SMMU for ethernet recently due to commit
>>> 954a03be033c7cef80ddc232e7cbdb17df735663.
>>
>> Finally :)
>>
>> However, from "git show 954a03be033c7cef80ddc232e7cbdb17df735663":
>>
>> + There are few reasons to allow unmatched stream bypass, and
>> + even fewer good ones.  If saying YES here breaks your board
>> + you should work on fixing your board.
>>
>> So, how can we fix this ? Is your ethernet DT node marked as
>> "dma-coherent;" ?
> 
> The first thing to try would be booting the failing setup with
> "iommu.passthrough=1" (or using CONFIG_IOMMU_DEFAULT_PASSTHROUGH) - if
> that makes things seem OK, then the problem is likely related to address
> translation; if not, then it's probably time to start looking at nasties
> like coherency and ordering, although in principle I wouldn't expect the
> SMMU to have too much impact there.

Setting "iommu.passthrough=1" works for me. However, I am not sure where
to go from here, so any ideas you have would be great.

> Do you know if the SMMU interrupts are working correctly? If not, it's
> possible that an incorrect address or mapping direction could lead to
> the DMA transaction just being silently terminated without any fault
> indication, which generally presents as inexplicable weirdness (I've
> certainly seen that on another platform with the mix of an unsupported
> interrupt controller and an 'imperfect' ethernet driver).

If I simply remove the iommu node for the ethernet controller, then I
see lots of ...

[6.296121] arm-smmu 1200.iommu: Unexpected global fault, this could be serious
[6.296125] arm-smmu 1200.iommu: GFSR 0x0002, GFSYNR0 0x, GFSYNR1 0x0014, GFSYNR2 0x

So I assume that this is triggering the SMMU interrupt correctly. 

> Just to confirm, has the original patch been tested with
> CONFIG_DMA_API_DEBUG to rule out any high-level mishaps?

Yes, one of the first things we tried, but it did not bear any fruit.

Cheers
Jon

-- 
nvpublic


Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-23 Thread Jon Hunter


On 23/07/2019 11:49, Jose Abreu wrote:
> From: Jon Hunter 
> Date: Jul/23/2019, 11:38:33 (UTC+00:00)
> 
>>
>> On 23/07/2019 11:07, Jose Abreu wrote:
>>> From: Jon Hunter 
>>> Date: Jul/23/2019, 11:01:24 (UTC+00:00)
>>>
 This appears to be a winner and by disabling the SMMU for the ethernet
 controller and reverting commit 954a03be033c7cef80ddc232e7cbdb17df735663
 this worked! So yes appears to be related to the SMMU being enabled. We
 had to enable the SMMU for ethernet recently due to commit
 954a03be033c7cef80ddc232e7cbdb17df735663.
>>>
>>> Finally :)
>>>
>>> However, from "git show 954a03be033c7cef80ddc232e7cbdb17df735663":
>>>
>>> + There are few reasons to allow unmatched stream bypass, and
>>> + even fewer good ones.  If saying YES here breaks your board
>>> + you should work on fixing your board.
>>>
>>> So, how can we fix this ? Is your ethernet DT node marked as 
>>> "dma-coherent;" ?
>>
>> TBH I have no idea. I can't say I fully understand your change or how it
>> is breaking things for us.
>>
>> Currently, the Tegra DT binding does not have 'dma-coherent' set. I see
>> this is optional, but I am not sure how you determine whether or not
>> this should be set.
> 
> From my understanding it means that your device / IP DMA accesses are 
> coherent from the CPU's point of view. I think that will be the case if the 
> GMAC is not behind any kind of IOMMU in the HW architecture.

I understand what coherency is, I just don't know how you tell if this
implementation of the ethernet controller is coherent or not.

Jon

-- 
nvpublic


RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-23 Thread Jose Abreu
From: Robin Murphy 
Date: Jul/23/2019, 11:29:28 (UTC+00:00)

> On 23/07/2019 11:07, Jose Abreu wrote:
> > From: Jon Hunter 
> > Date: Jul/23/2019, 11:01:24 (UTC+00:00)
> > 
> >> This appears to be a winner and by disabling the SMMU for the ethernet
> >> controller and reverting commit 954a03be033c7cef80ddc232e7cbdb17df735663
> >> this worked! So yes appears to be related to the SMMU being enabled. We
> >> had to enable the SMMU for ethernet recently due to commit
> >> 954a03be033c7cef80ddc232e7cbdb17df735663.
> > 
> > Finally :)
> > 
> > However, from "git show 954a03be033c7cef80ddc232e7cbdb17df735663":
> > 
> > + There are few reasons to allow unmatched stream bypass, and
> > + even fewer good ones.  If saying YES here breaks your board
> > + you should work on fixing your board.
> > 
> > So, how can we fix this ? Is your ethernet DT node marked as
> > "dma-coherent;" ?
> 
> The first thing to try would be booting the failing setup with 
> "iommu.passthrough=1" (or using CONFIG_IOMMU_DEFAULT_PASSTHROUGH) - if 
> that makes things seem OK, then the problem is likely related to address 
> translation; if not, then it's probably time to start looking at nasties 
> like coherency and ordering, although in principle I wouldn't expect the 
> SMMU to have too much impact there.
> 
> Do you know if the SMMU interrupts are working correctly? If not, it's 
> possible that an incorrect address or mapping direction could lead to 
> the DMA transaction just being silently terminated without any fault 
> indication, which generally presents as inexplicable weirdness (I've 
> certainly seen that on another platform with the mix of an unsupported 
> interrupt controller and an 'imperfect' ethernet driver).
> 
> Just to confirm, has the original patch been tested with 
> CONFIG_DMA_API_DEBUG to rule out any high-level mishaps?

Yes but both my setups don't have any IOMMU: One is x86 + SWIOTLB and 
another is just coherent regarding CPU.

---
Thanks,
Jose Miguel Abreu


RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-23 Thread Jose Abreu
From: Jon Hunter 
Date: Jul/23/2019, 11:38:33 (UTC+00:00)

> 
> On 23/07/2019 11:07, Jose Abreu wrote:
> > From: Jon Hunter 
> > Date: Jul/23/2019, 11:01:24 (UTC+00:00)
> > 
> >> This appears to be a winner and by disabling the SMMU for the ethernet
> >> controller and reverting commit 954a03be033c7cef80ddc232e7cbdb17df735663
> >> this worked! So yes appears to be related to the SMMU being enabled. We
> >> had to enable the SMMU for ethernet recently due to commit
> >> 954a03be033c7cef80ddc232e7cbdb17df735663.
> > 
> > Finally :)
> > 
> > However, from "git show 954a03be033c7cef80ddc232e7cbdb17df735663":
> > 
> > + There are few reasons to allow unmatched stream bypass, and
> > + even fewer good ones.  If saying YES here breaks your board
> > + you should work on fixing your board.
> > 
> > So, how can we fix this ? Is your ethernet DT node marked as 
> > "dma-coherent;" ?
> 
> TBH I have no idea. I can't say I fully understand your change or how it
> is breaking things for us.
> 
> Currently, the Tegra DT binding does not have 'dma-coherent' set. I see
> this is optional, but I am not sure how you determine whether or not
> this should be set.

From my understanding it means that your device / IP DMA accesses are coherent 
from the CPU's point of view. I think that will be the case if the GMAC is not 
behind any kind of IOMMU in the HW architecture.

I don't know about this SMMU, but the source does have some special 
conditions when the device is dma-coherent.

---
Thanks,
Jose Miguel Abreu


Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-23 Thread Jon Hunter


On 23/07/2019 11:07, Jose Abreu wrote:
> From: Jon Hunter 
> Date: Jul/23/2019, 11:01:24 (UTC+00:00)
> 
>> This appears to be a winner and by disabling the SMMU for the ethernet
>> controller and reverting commit 954a03be033c7cef80ddc232e7cbdb17df735663
>> this worked! So yes appears to be related to the SMMU being enabled. We
>> had to enable the SMMU for ethernet recently due to commit
>> 954a03be033c7cef80ddc232e7cbdb17df735663.
> 
> Finally :)
> 
> However, from "git show 954a03be033c7cef80ddc232e7cbdb17df735663":
> 
> + There are few reasons to allow unmatched stream bypass, and
> + even fewer good ones.  If saying YES here breaks your board
> + you should work on fixing your board.
> 
> So, how can we fix this ? Is your ethernet DT node marked as 
> "dma-coherent;" ?

TBH I have no idea. I can't say I fully understand your change or how it
is breaking things for us.

Currently, the Tegra DT binding does not have 'dma-coherent' set. I see
this is optional, but I am not sure how you determine whether or not
this should be set.

Jon

-- 
nvpublic


Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-23 Thread Robin Murphy

On 23/07/2019 11:07, Jose Abreu wrote:

From: Jon Hunter 
Date: Jul/23/2019, 11:01:24 (UTC+00:00)


This appears to be a winner and by disabling the SMMU for the ethernet
controller and reverting commit 954a03be033c7cef80ddc232e7cbdb17df735663
this worked! So yes appears to be related to the SMMU being enabled. We
had to enable the SMMU for ethernet recently due to commit
954a03be033c7cef80ddc232e7cbdb17df735663.


Finally :)

However, from "git show 954a03be033c7cef80ddc232e7cbdb17df735663":

+ There are few reasons to allow unmatched stream bypass, and
+ even fewer good ones.  If saying YES here breaks your board
+ you should work on fixing your board.

So, how can we fix this ? Is your ethernet DT node marked as
"dma-coherent;" ?


The first thing to try would be booting the failing setup with 
"iommu.passthrough=1" (or using CONFIG_IOMMU_DEFAULT_PASSTHROUGH) - if 
that makes things seem OK, then the problem is likely related to address 
translation; if not, then it's probably time to start looking at nasties 
like coherency and ordering, although in principle I wouldn't expect the 
SMMU to have too much impact there.


Do you know if the SMMU interrupts are working correctly? If not, it's 
possible that an incorrect address or mapping direction could lead to 
the DMA transaction just being silently terminated without any fault 
indication, which generally presents as inexplicable weirdness (I've 
certainly seen that on another platform with the mix of an unsupported 
interrupt controller and an 'imperfect' ethernet driver).


Just to confirm, has the original patch been tested with 
CONFIG_DMA_API_DEBUG to rule out any high-level mishaps?


Robin.


RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-23 Thread Jose Abreu
From: Jon Hunter 
Date: Jul/23/2019, 11:01:24 (UTC+00:00)

> This appears to be a winner and by disabling the SMMU for the ethernet
> controller and reverting commit 954a03be033c7cef80ddc232e7cbdb17df735663
> this worked! So yes appears to be related to the SMMU being enabled. We
> had to enable the SMMU for ethernet recently due to commit
> 954a03be033c7cef80ddc232e7cbdb17df735663.

Finally :)

However, from "git show 954a03be033c7cef80ddc232e7cbdb17df735663":

+ There are few reasons to allow unmatched stream bypass, and
+ even fewer good ones.  If saying YES here breaks your board
+ you should work on fixing your board.

So, how can we fix this ? Is your ethernet DT node marked as 
"dma-coherent;" ?

---
Thanks,
Jose Miguel Abreu


Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-23 Thread Jon Hunter


On 23/07/2019 09:14, Jose Abreu wrote:
> From: Jose Abreu 
> Date: Jul/22/2019, 15:04:49 (UTC+00:00)
> 
>> From: Jon Hunter 
>> Date: Jul/22/2019, 13:05:38 (UTC+00:00)
>>
>>>
>>> On 22/07/2019 12:39, Jose Abreu wrote:
 From: Lars Persson 
 Date: Jul/22/2019, 12:11:50 (UTC+00:00)

> On Mon, Jul 22, 2019 at 12:18 PM Ilias Apalodimas
>  wrote:
>>
>> On Thu, Jul 18, 2019 at 07:48:04AM +, Jose Abreu wrote:
>>> From: Jon Hunter 
>>> Date: Jul/17/2019, 19:58:53 (UTC+00:00)
>>>
 Let me know if you have any thoughts.
>>>
>>> Can you try attached patch ?
>>>
>>
>> The log says someone calls panic(), right?
>> Can we try and figure out where that happens during the stmmac init phase?
>>
>
> The reason for the panic is hidden in this one line of the kernel logs:
> Kernel panic - not syncing: Attempted to kill init! exitcode=0x000b
>
> The init process is killed by SIGSEGV (signal 11 = 0xb).
>
> I would suggest you look for data corruption bugs in the RX path. If
> the code is fetched from the NFS mount then a corrupt RX buffer can
> trigger a crash in userspace.
>
> /Lars


 Jon, I'm not familiar with ARM. Are the buffer addresses being allocated 
 in a coherent region ? Can you try attached patch which adds full memory 
 barrier before the sync ?
>>>
>>> TBH I am not sure about the buffer addresses either. The attached patch
>>> did not help. Same problem persists.
>>
>> OK. I'm just guessing now at this stage but can you disable SMP ?

I tried limiting the number of CPUs to one by setting 'maxcpus=0' on the
kernel command line. However, this did not help.

>> We have to narrow down if this is coherency issue but you said that 
>> booting without NFS and then mounting manually the share works ... So, 
>> can you share logs with same debug prints in this condition in order to 
>> compare ?
> 
> Jon, I have one ARM based board and I can't face your issue but I 
> noticed that my buffer addresses are being mapped using SWIOTLB. Can you 
> disable IOMMU support on your setup and let me know if the problem 
> persists ?

This appears to be a winner and by disabling the SMMU for the ethernet
controller and reverting commit 954a03be033c7cef80ddc232e7cbdb17df735663
this worked! So yes appears to be related to the SMMU being enabled. We
had to enable the SMMU for ethernet recently due to commit
954a03be033c7cef80ddc232e7cbdb17df735663.

Cheers
Jon

-- 
nvpublic


RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-23 Thread Jose Abreu
From: Jose Abreu 
Date: Jul/22/2019, 15:04:49 (UTC+00:00)

> From: Jon Hunter 
> Date: Jul/22/2019, 13:05:38 (UTC+00:00)
> 
> > 
> > On 22/07/2019 12:39, Jose Abreu wrote:
> > > From: Lars Persson 
> > > Date: Jul/22/2019, 12:11:50 (UTC+00:00)
> > > 
> > >> On Mon, Jul 22, 2019 at 12:18 PM Ilias Apalodimas
> > >>  wrote:
> > >>>
> > >>> On Thu, Jul 18, 2019 at 07:48:04AM +, Jose Abreu wrote:
> >  From: Jon Hunter 
> >  Date: Jul/17/2019, 19:58:53 (UTC+00:00)
> > 
> > > Let me know if you have any thoughts.
> > 
> >  Can you try attached patch ?
> > 
> > >>>
> > >>> The log says someone calls panic(), right?
> > >>> Can we try and figure out where that happens during the stmmac init phase?
> > >>>
> > >>
> > >> The reason for the panic is hidden in this one line of the kernel logs:
> > >> Kernel panic - not syncing: Attempted to kill init! exitcode=0x000b
> > >>
> > >> The init process is killed by SIGSEGV (signal 11 = 0xb).
> > >>
> > >> I would suggest you look for data corruption bugs in the RX path. If
> > >> the code is fetched from the NFS mount then a corrupt RX buffer can
> > >> trigger a crash in userspace.
> > >>
> > >> /Lars
> > > 
> > > 
> > > Jon, I'm not familiar with ARM. Are the buffer addresses being allocated 
> > > in a coherent region ? Can you try attached patch which adds full memory 
> > > barrier before the sync ?
> > 
> > TBH I am not sure about the buffer addresses either. The attached patch
> > did not help. Same problem persists.
> 
> OK. I'm just guessing now at this stage but can you disable SMP ?
> 
> We have to narrow down if this is coherency issue but you said that 
> booting without NFS and then mounting manually the share works ... So, 
> can you share logs with same debug prints in this condition in order to 
> compare ?

Jon, I have one ARM based board and I can't face your issue but I 
noticed that my buffer addresses are being mapped using SWIOTLB. Can you 
disable IOMMU support on your setup and let me know if the problem 
persists ?

---
Thanks,
Jose Miguel Abreu


RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-22 Thread Jose Abreu
From: Jon Hunter 
Date: Jul/22/2019, 13:05:38 (UTC+00:00)

> 
> On 22/07/2019 12:39, Jose Abreu wrote:
> > From: Lars Persson 
> > Date: Jul/22/2019, 12:11:50 (UTC+00:00)
> > 
> >> On Mon, Jul 22, 2019 at 12:18 PM Ilias Apalodimas
> >>  wrote:
> >>>
> >>> On Thu, Jul 18, 2019 at 07:48:04AM +, Jose Abreu wrote:
>  From: Jon Hunter 
>  Date: Jul/17/2019, 19:58:53 (UTC+00:00)
> 
> > Let me know if you have any thoughts.
> 
>  Can you try attached patch ?
> 
> >>>
> >>> The log says someone calls panic(), right?
> >>> Can we try and figure out where that happens during the stmmac init phase?
> >>>
> >>
> >> The reason for the panic is hidden in this one line of the kernel logs:
> >> Kernel panic - not syncing: Attempted to kill init! exitcode=0x000b
> >>
> >> The init process is killed by SIGSEGV (signal 11 = 0xb).
> >>
> >> I would suggest you look for data corruption bugs in the RX path. If
> >> the code is fetched from the NFS mount then a corrupt RX buffer can
> >> trigger a crash in userspace.
> >>
> >> /Lars
> > 
> > 
> > Jon, I'm not familiar with ARM. Are the buffer addresses being allocated 
> > in a coherent region ? Can you try attached patch which adds full memory 
> > barrier before the sync ?
> 
> TBH I am not sure about the buffer addresses either. The attached patch
> did not help. Same problem persists.

OK. I'm just guessing now at this stage but can you disable SMP ?

We have to narrow down if this is coherency issue but you said that 
booting without NFS and then mounting manually the share works ... So, 
can you share logs with same debug prints in this condition in order to 
compare ?

---
Thanks,
Jose Miguel Abreu


Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-22 Thread Jon Hunter


On 22/07/2019 12:39, Jose Abreu wrote:
> From: Lars Persson 
> Date: Jul/22/2019, 12:11:50 (UTC+00:00)
> 
>> On Mon, Jul 22, 2019 at 12:18 PM Ilias Apalodimas
>>  wrote:
>>>
>>> On Thu, Jul 18, 2019 at 07:48:04AM +, Jose Abreu wrote:
 From: Jon Hunter 
 Date: Jul/17/2019, 19:58:53 (UTC+00:00)

> Let me know if you have any thoughts.

 Can you try attached patch ?

>>>
>>> The log says someone calls panic(), right?
>>> Can we try and figure out where that happens during the stmmac init phase?
>>>
>>
>> The reason for the panic is hidden in this one line of the kernel logs:
>> Kernel panic - not syncing: Attempted to kill init! exitcode=0x000b
>>
>> The init process is killed by SIGSEGV (signal 11 = 0xb).
>>
>> I would suggest you look for data corruption bugs in the RX path. If
>> the code is fetched from the NFS mount then a corrupt RX buffer can
>> trigger a crash in userspace.
>>
>> /Lars
> 
> 
> Jon, I'm not familiar with ARM. Are the buffer addresses being allocated 
> in a coherent region ? Can you try attached patch which adds full memory 
> barrier before the sync ?

TBH I am not sure about the buffer addresses either. The attached patch
did not help. Same problem persists.

Jon

-- 
nvpublic


RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-22 Thread Jose Abreu
From: Lars Persson 
Date: Jul/22/2019, 12:11:50 (UTC+00:00)

> On Mon, Jul 22, 2019 at 12:18 PM Ilias Apalodimas
>  wrote:
> >
> > On Thu, Jul 18, 2019 at 07:48:04AM +, Jose Abreu wrote:
> > > From: Jon Hunter 
> > > Date: Jul/17/2019, 19:58:53 (UTC+00:00)
> > >
> > > > Let me know if you have any thoughts.
> > >
> > > Can you try attached patch ?
> > >
> >
> > The log says someone calls panic(), right?
> > Can we try and figure out where that happens during the stmmac init phase?
> >
> 
> The reason for the panic is hidden in this one line of the kernel logs:
> Kernel panic - not syncing: Attempted to kill init! exitcode=0x000b
> 
> The init process is killed by SIGSEGV (signal 11 = 0xb).
> 
> I would suggest you look for data corruption bugs in the RX path. If
> the code is fetched from the NFS mount then a corrupt RX buffer can
> trigger a crash in userspace.
> 
> /Lars


Jon, I'm not familiar with ARM. Are the buffer addresses being allocated 
in a coherent region ? Can you try attached patch which adds full memory 
barrier before the sync ?
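
(The attachment itself is not inlined in the archive; its gist is roughly 
the following sketch:)

    /* complete all prior CPU writes before handing the buffer back */
    mb();
    dma_sync_single_for_device(priv->device, buf->addr,
                               priv->dma_buf_sz, DMA_FROM_DEVICE);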

---
Thanks,
Jose Miguel Abreu


0001-net-stmmac-Add-memory-barrier.patch
Description: 0001-net-stmmac-Add-memory-barrier.patch


Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-22 Thread Lars Persson
On Mon, Jul 22, 2019 at 12:18 PM Ilias Apalodimas
 wrote:
>
> On Thu, Jul 18, 2019 at 07:48:04AM +, Jose Abreu wrote:
> > From: Jon Hunter 
> > Date: Jul/17/2019, 19:58:53 (UTC+00:00)
> >
> > > Let me know if you have any thoughts.
> >
> > Can you try attached patch ?
> >
>
> The log says someone calls panic(), right?
> Can we try and figure out where that happens during the stmmac init phase?
>

The reason for the panic is hidden in this one line of the kernel logs:
Kernel panic - not syncing: Attempted to kill init! exitcode=0x000b

The init process is killed by SIGSEGV (signal 11 = 0xb).
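
(The decoding is just the low bits of the reported exit code; a minimal 
illustration, not from the thread:)

    int exitcode = 0x0b;       /* from "exitcode=0x000b" in the panic */
    int sig = exitcode & 0x7f; /* low 7 bits = fatal signal: 11 = SIGSEGV */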

I would suggest you look for data corruption bugs in the RX path. If
the code is fetched from the NFS mount then a corrupt RX buffer can
trigger a crash in userspace.

/Lars


Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-22 Thread Jon Hunter


On 22/07/2019 10:57, Jose Abreu wrote:

...

> Also, please add attached patch. You'll get a compiler warning, just 
> disregard it.

Here you are ...

https://paste.ubuntu.com/p/H9Mvv37vN9/

Cheers
Jon

-- 
nvpublic


Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-22 Thread Ilias Apalodimas
On Thu, Jul 18, 2019 at 07:48:04AM +, Jose Abreu wrote:
> From: Jon Hunter 
> Date: Jul/17/2019, 19:58:53 (UTC+00:00)
> 
> > Let me know if you have any thoughts.
> 
> Can you try attached patch ?
> 

The log says someone calls panic(), right?
Can we try and figure out where that happens during the stmmac init phase?

Thanks
/Ilias



RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-22 Thread Jose Abreu
From: Jose Abreu 
Date: Jul/22/2019, 10:47:44 (UTC+00:00)

> From: Jon Hunter 
> Date: Jul/22/2019, 10:37:18 (UTC+00:00)
> 
> > 
> > On 22/07/2019 08:23, Jose Abreu wrote:
> > > From: Jon Hunter 
> > > Date: Jul/19/2019, 14:35:52 (UTC+00:00)
> > > 
> > >>
> > >> On 19/07/2019 13:32, Jose Abreu wrote:
> > >>> From: Jon Hunter 
> > >>> Date: Jul/19/2019, 13:30:10 (UTC+00:00)
> > >>>
>  I booted the board without using NFS and then started using dhclient to
> >  bring up the network interface and it appears to be working fine. I can
> >  even mount the NFS share fine. So it does appear to be particular to
> >  using NFS to mount the rootfs.
> > >>>
> > >>> Damn. Can you send me your .config ?
> > >>
> > >> Yes no problem. Attached.
> > > 
> > > Can you compile your image without modules (i.e. all built-in) and let 
> > > me know if the error still happens ?
> > 
> > I simply removed the /lib/modules directory from the NFS share and
> > verified that I still see the same issue. So it is not loading the
> > modules that is a problem.
> 
> Well, I meant that loading modules can be an issue but that's not the 
> way to verify that.
> 
> You need to have all modules built-in so that it proves that no module 
> will try to be loaded.
> 
> Anyway, this is probably not the cause, as you wouldn't even be able to 
> compile the kernel if you need a symbol from a module with stmmac built-in. 
> Kconfig would complain about that.
> 
> The other cause could be data corruption in the RX path. Are you able to 
> send me packet dump by running wireshark either in the transmitter side 
> (i.e. NFS server), or using some kind of switch ?
> 
> ---
> Thanks,
> Jose Miguel Abreu

Also, please add attached patch. You'll get a compiler warning, just 
disregard it.

---
Thanks,
Jose Miguel Abreu


0001-net-stmmac-Debug-print.patch
Description: 0001-net-stmmac-Debug-print.patch


RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-22 Thread Jose Abreu
From: Jon Hunter 
Date: Jul/22/2019, 10:37:18 (UTC+00:00)

> 
> On 22/07/2019 08:23, Jose Abreu wrote:
> > From: Jon Hunter 
> > Date: Jul/19/2019, 14:35:52 (UTC+00:00)
> > 
> >>
> >> On 19/07/2019 13:32, Jose Abreu wrote:
> >>> From: Jon Hunter 
> >>> Date: Jul/19/2019, 13:30:10 (UTC+00:00)
> >>>
>  I booted the board without using NFS and then started using dhclient to
>  bring up the network interface and it appears to be working fine. I can
>  even mount the NFS share fine. So it does appear to be particular to
>  using NFS to mount the rootfs.
> >>>
> >>> Damn. Can you send me your .config ?
> >>
> >> Yes no problem. Attached.
> > 
> > Can you compile your image without modules (i.e. all built-in) and let 
> > me know if the error still happens ?
> 
> I simply removed the /lib/modules directory from the NFS share and
> verified that I still see the same issue. So it is not loading the
> modules that is a problem.

Well, I meant that loading modules can be an issue but that's not the 
way to verify that.

You need to have all modules built-in so that it proves that no module 
will try to be loaded.

Anyway, this is probably not the cause, as you wouldn't even be able to 
compile the kernel if you need a symbol from a module with stmmac built-in. 
Kconfig would complain about that.

The other cause could be data corruption in the RX path. Are you able to 
send me packet dump by running wireshark either in the transmitter side 
(i.e. NFS server), or using some kind of switch ?

---
Thanks,
Jose Miguel Abreu


Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-22 Thread Jon Hunter


On 22/07/2019 08:23, Jose Abreu wrote:
> From: Jon Hunter 
> Date: Jul/19/2019, 14:35:52 (UTC+00:00)
> 
>>
>> On 19/07/2019 13:32, Jose Abreu wrote:
>>> From: Jon Hunter 
>>> Date: Jul/19/2019, 13:30:10 (UTC+00:00)
>>>
 I booted the board without using NFS and then started using dhclient to
 bring up the network interface and it appears to be working fine. I can
 even mount the NFS share fine. So it does appear to be particular to
 using NFS to mount the rootfs.
>>>
>>> Damn. Can you send me your .config ?
>>
>> Yes no problem. Attached.
> 
> Can you compile your image without modules (i.e. all built-in) and let 
> me know if the error still happens ?

I simply removed the /lib/modules directory from the NFS share and
verified that I still see the same issue. So it is not loading the
modules that is a problem.

Jon

-- 
nvpublic


RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-22 Thread Jose Abreu
From: Jon Hunter 
Date: Jul/19/2019, 14:35:52 (UTC+00:00)

> 
> On 19/07/2019 13:32, Jose Abreu wrote:
> > From: Jon Hunter 
> > Date: Jul/19/2019, 13:30:10 (UTC+00:00)
> > 
> >> I booted the board without using NFS and then started using dhclient to
> >> bring up the network interface and it appears to be working fine. I can
> >> even mount the NFS share fine. So it does appear to be particular to
> >> using NFS to mount the rootfs.
> > 
> > Damn. Can you send me your .config ?
> 
> Yes no problem. Attached.

Can you compile your image without modules (i.e. all built-in) and let 
me know if the error still happens ?

---
Thanks,
Jose Miguel Abreu


Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-19 Thread Jon Hunter


On 19/07/2019 13:28, Jose Abreu wrote:
> From: Jose Abreu 
> Date: Jul/19/2019, 11:25:41 (UTC+00:00)
> 
>> Thanks. Can you add attached patch and check if WARN is triggered ? 
> 
> BTW, also add the attached one in this mail. The WARN will probably 
> never get triggered without it.
> 
> Can you also print "buf->addr" after the WARN_ON ?

I added this patch, but still no warning.

Cheers
Jon

-- 
nvpublic


RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-19 Thread Jose Abreu
From: Jon Hunter 
Date: Jul/19/2019, 13:30:10 (UTC+00:00)

> I booted the board without using NFS and then started using dhclient to
> bring up the network interface and it appears to be working fine. I can
> even mount the NFS share fine. So it does appear to be particular to
> using NFS to mount the rootfs.

Damn. Can you send me your .config ?

---
Thanks,
Jose Miguel Abreu


Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-19 Thread Jon Hunter


On 19/07/2019 11:25, Jose Abreu wrote:

...

> Thanks. Can you add the attached patch and check if the WARN is triggered? 
> It would also be good to know whether this is a boot-specific crash or 
> whether it just doesn't work at all, i.e. not using NFS to mount the rootfs 
> and instead manually configuring the interface and sending/receiving packets.

With this patch applied I did not see the WARN trigger.

I booted the board without using NFS and then started using dhclient to
bring up the network interface and it appears to be working fine. I can
even mount the NFS share fine. So it does appear to be particular to
using NFS to mount the rootfs.

Cheers
Jon

-- 
nvpublic


RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-19 Thread Jose Abreu
From: Jose Abreu 
Date: Jul/19/2019, 11:25:41 (UTC+00:00)

> Thanks. Can you add attached patch and check if WARN is triggered ? 

BTW, also add the attached one in this mail. The WARN will probably 
never get triggered without it.

Can you also print "buf->addr" after the WARN_ON ?
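
(That is, something along these lines in stmmac_init_rx_buffers() -- a 
sketch of the requested debug, not the attached patch itself:)

    WARN_ON(!buf->page);
    pr_info("stmmac: buf->addr = %pad\n", &buf->addr); /* %pad takes a dma_addr_t pointer */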

---
Thanks,
Jose Miguel Abreu


0001-net-stmmac-Use-kcalloc-instead-of-kmalloc_array.patch
Description: 0001-net-stmmac-Use-kcalloc-instead-of-kmalloc_array.patch


RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-19 Thread Jose Abreu
From: Jon Hunter 
Date: Jul/19/2019, 09:49:10 (UTC+00:00)

> 
> On 19/07/2019 09:44, Jose Abreu wrote:
> > From: Jon Hunter 
> > Date: Jul/19/2019, 09:37:49 (UTC+00:00)
> > 
> >>
> >> On 19/07/2019 08:51, Jose Abreu wrote:
> >>> From: Jon Hunter 
> >>> Date: Jul/18/2019, 10:16:20 (UTC+00:00)
> >>>
>  Have you tried using NFS on a board with this ethernet controller?
> >>>
> >>> I'm having some issues setting up the NFS server in order to replicate 
> >>> so this may take some time.
> >>
> >> If that's the case, we may wish to consider reverting this for now as it
> >> is preventing our board from booting. Appears to revert cleanly on top
> >> of mainline.
> >>
> >>> Are you able to add some debug in stmmac_init_rx_buffers() to see what's 
> >>> the buffer address ?
> >>
> >> If you have a debug patch you would like me to apply and test with I
> >> can. However, it is best you prepare the patch as maybe I will not dump
> >> the appropriate addresses.
> >>
> >> Cheers
> >> Jon
> >>
> >> -- 
> >> nvpublic
> > 
> > Send me full boot log please.
> 
> Please see: https://paste.debian.net/1092277/
> 
> Cheers
> Jon
> 
> -- 
> nvpublic

Thanks. Can you add the attached patch and check if the WARN is triggered? 
It would also be good to know whether this is a boot-specific crash or 
whether it just doesn't work at all, i.e. not using NFS to mount the rootfs 
and instead manually configuring the interface and sending/receiving packets.

---
Thanks,
Jose Miguel Abreu


0001-net-stmmac-Add-page-sanity-check.patch
Description: 0001-net-stmmac-Add-page-sanity-check.patch


Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-19 Thread Jon Hunter


On 19/07/2019 09:44, Jose Abreu wrote:
> From: Jon Hunter 
> Date: Jul/19/2019, 09:37:49 (UTC+00:00)
> 
>>
>> On 19/07/2019 08:51, Jose Abreu wrote:
>>> From: Jon Hunter 
>>> Date: Jul/18/2019, 10:16:20 (UTC+00:00)
>>>
 Have you tried using NFS on a board with this ethernet controller?
>>>
>>> I'm having some issues setting up the NFS server in order to replicate 
>>> so this may take some time.
>>
>> If that's the case, we may wish to consider reverting this for now as it
>> is preventing our board from booting. Appears to revert cleanly on top
>> of mainline.
>>
>>> Are you able to add some debug in stmmac_init_rx_buffers() to see what's 
>>> the buffer address ?
>>
>> If you have a debug patch you would like me to apply and test with I
>> can. However, it is best you prepare the patch as maybe I will not dump
>> the appropriate addresses.
>>
>> Cheers
>> Jon
>>
>> -- 
>> nvpublic
> 
> Send me full boot log please.

Please see: https://paste.debian.net/1092277/

Cheers
Jon

-- 
nvpublic


RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-19 Thread Jose Abreu
From: Jon Hunter 
Date: Jul/19/2019, 09:37:49 (UTC+00:00)

> 
> On 19/07/2019 08:51, Jose Abreu wrote:
> > From: Jon Hunter 
> > Date: Jul/18/2019, 10:16:20 (UTC+00:00)
> > 
> >> Have you tried using NFS on a board with this ethernet controller?
> > 
> > I'm having some issues setting up the NFS server in order to replicate 
> > so this may take some time.
> 
> If that's the case, we may wish to consider reverting this for now as it
> is preventing our board from booting. Appears to revert cleanly on top
> of mainline.
> 
> > Are you able to add some debug in stmmac_init_rx_buffers() to see what's 
> > the buffer address ?
> 
> If you have a debug patch you would like me to apply and test with I
> can. However, it is best you prepare the patch as maybe I will not dump
> the appropriate addresses.
> 
> Cheers
> Jon
> 
> -- 
> nvpublic

Send me full boot log please.

---
Thanks,
Jose Miguel Abreu


Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-19 Thread Jon Hunter


On 19/07/2019 08:51, Jose Abreu wrote:
> From: Jon Hunter 
> Date: Jul/18/2019, 10:16:20 (UTC+00:00)
> 
>> Have you tried using NFS on a board with this ethernet controller?
> 
> I'm having some issues setting up the NFS server in order to replicate 
> so this may take some time.

If that's the case, we may wish to consider reverting this for now as it
is preventing our board from booting. Appears to revert cleanly on top
of mainline.

> Are you able to add some debug in stmmac_init_rx_buffers() to see what's 
> the buffer address ?

If you have a debug patch you would like me to apply and test with I
can. However, it is best you prepare the patch as maybe I will not dump
the appropriate addresses.

Cheers
Jon

-- 
nvpublic


RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-19 Thread Jose Abreu
From: Jon Hunter 
Date: Jul/18/2019, 10:16:20 (UTC+00:00)

> Have you tried using NFS on a board with this ethernet controller?

I'm having some issues setting up the NFS server in order to replicate 
so this may take some time.

Are you able to add some debug in stmmac_init_rx_buffers() to see what's 
the buffer address ?

---
Thanks,
Jose Miguel Abreu


Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-18 Thread Jon Hunter


On 18/07/2019 08:48, Jose Abreu wrote:
> From: Jon Hunter 
> Date: Jul/17/2019, 19:58:53 (UTC+00:00)
> 
>> Let me know if you have any thoughts.
> 
> Can you try attached patch ?

Yes this did not help. I tried enabling the following but no more output
is seen.

CONFIG_DMA_API_DEBUG=y
CONFIG_DMA_API_DEBUG_SG=y

Have you tried using NFS on a board with this ethernet controller?

Cheers,
Jon

-- 
nvpublic


RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-18 Thread Jose Abreu
From: Jon Hunter 
Date: Jul/17/2019, 19:58:53 (UTC+00:00)

> Let me know if you have any thoughts.

Can you try attached patch ?

---
Thanks,
Jose Miguel Abreu


0001-net-stmmac-RX-Descriptors-need-to-be-clean-before-se.patch
Description: 0001-net-stmmac-RX-Descriptors-need-to-be-clean-before-se.patch


RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-18 Thread Jose Abreu
From: Jon Hunter 
Date: Jul/17/2019, 19:58:53 (UTC+00:00)

> I am seeing a boot regression on one of our Tegra boards with both
> mainline and -next. Bisecting is pointing to this commit and reverting
> this commit on top of mainline fixes the problem. Unfortunately, there
> is not much of a backtrace but what I have captured is below. 
> 
> Please note that this is seen on a system that is using NFS to mount
> the rootfs and the crash occurs right around the point the rootfs is
> mounted.
> 
> Let me know if you have any thoughts.
> 
> Cheers
> Jon 
> 
> [   12.221843] Kernel panic - not syncing: Attempted to kill init! exitcode=0x000b
> [   12.229485] CPU: 5 PID: 1 Comm: init Tainted: G S 5.2.0-11500-g916f562fb28a #18
> [   12.238076] Hardware name: NVIDIA Tegra186 P2771- Development Board (DT)
> [   12.245105] Call trace:
> [   12.247548]  dump_backtrace+0x0/0x150
> [   12.251199]  show_stack+0x14/0x20
> [   12.254505]  dump_stack+0x9c/0xc4
> [   12.257809]  panic+0x13c/0x32c
> [   12.260853]  complete_and_exit+0x0/0x20
> [   12.264676]  do_group_exit+0x34/0x98
> [   12.268241]  get_signal+0x104/0x668
> [   12.271718]  do_notify_resume+0x2ac/0x380
> [   12.275716]  work_pending+0x8/0x10
> [   12.279109] SMP: stopping secondary CPUs
> [   12.283025] Kernel Offset: disabled
> [   12.286502] CPU features: 0x0002,20806000
> [   12.290499] Memory Limit: none
> [   12.293548] ---[ end Kernel panic - not syncing: Attempted to kill init! exitcode=0x000b ]---
> 
> -- 
> nvpublic

You don't have any more data? Can you activate DMA-API debug and check 
whether any more info is output?

---
Thanks,
Jose Miguel Abreu


Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-17 Thread Jon Hunter


On 03/07/2019 11:37, Jose Abreu wrote:
> Mapping and unmapping the DMA region is a big bottleneck in the stmmac driver,
> especially in the RX path.
> 
> This commit introduces support for the Page Pool API and uses it in all RX
> queues. With this change, we get more stable throughput and some increase
> of bandwidth with iperf:
>   - MAC1000: 950 Mbps
>   - XGMAC: 9.22 Gbps

I am seeing a boot regression on one of our Tegra boards with both
mainline and -next. Bisecting is pointing to this commit and reverting
this commit on top of mainline fixes the problem. Unfortunately, there
is not much of a backtrace but what I have captured is below. 

Please note that this is seen on a system that is using NFS to mount
the rootfs and the crash occurs right around the point the rootfs is
mounted.

Let me know if you have any thoughts.

Cheers
Jon 

[   12.221843] Kernel panic - not syncing: Attempted to kill init! exitcode=0x000b
[   12.229485] CPU: 5 PID: 1 Comm: init Tainted: G S 5.2.0-11500-g916f562fb28a #18
[   12.238076] Hardware name: NVIDIA Tegra186 P2771- Development Board (DT)
[   12.245105] Call trace:
[   12.247548]  dump_backtrace+0x0/0x150
[   12.251199]  show_stack+0x14/0x20
[   12.254505]  dump_stack+0x9c/0xc4
[   12.257809]  panic+0x13c/0x32c
[   12.260853]  complete_and_exit+0x0/0x20
[   12.264676]  do_group_exit+0x34/0x98
[   12.268241]  get_signal+0x104/0x668
[   12.271718]  do_notify_resume+0x2ac/0x380
[   12.275716]  work_pending+0x8/0x10
[   12.279109] SMP: stopping secondary CPUs
[   12.283025] Kernel Offset: disabled
[   12.286502] CPU features: 0x0002,20806000
[   12.290499] Memory Limit: none
[   12.293548] ---[ end Kernel panic - not syncing: Attempted to kill init! exitcode=0x000b ]---

-- 
nvpublic


Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-04 Thread Jesper Dangaard Brouer
On Thu, 4 Jul 2019 15:18:19 +
Jose Abreu  wrote:

> From: Jesper Dangaard Brouer 
> 
> > You can just use page_pool_free() (P.S. I'm working on reintroducing
> > the page_pool_destroy wrapper).  As you say, you will not have in-flight
> > frames/pages in this driver use-case.  
> 
> Well, if I remove the request_shutdown() it will trigger the "API usage 
> violation" WARN ...
> 
> I think this is because the alloc cache is only freed in request_shutdown(), 
> or I'm having some leak :D

Sorry for not being clear.  You of course first have to call
page_pool_request_shutdown() and then call page_pool_free().
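
(That is, a teardown sequence shaped like this sketch, for the case where no 
pages are in flight:)

    /* flush the pool's alloc cache; reports whether shutdown is safe */
    page_pool_request_shutdown(rx_q->page_pool);
    /* with no in-flight pages, the pool itself can now be released */
    page_pool_free(rx_q->page_pool);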

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer


RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-04 Thread Jose Abreu
From: Jesper Dangaard Brouer 

> You can just use page_pool_free() (P.S. I'm working on reintroducing
> the page_pool_destroy wrapper).  As you say, you will not have in-flight
> frames/pages in this driver use-case.

Well, if I remove the request_shutdown() it will trigger the "API usage 
violation" WARN ...

I think this is because the alloc cache is only freed in request_shutdown(), 
or I'm having some leak :D


Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-04 Thread Jesper Dangaard Brouer
On Thu, 4 Jul 2019 14:45:59 +
Jose Abreu  wrote:

> From: Jesper Dangaard Brouer 
> 
> > The page_pool_request_shutdown() API returns an indication of whether any
> > frames/pages are still in flight, so you know when it is safe to call
> > page_pool_free(), which you are also missing a call to.
> > 
> > This page_pool_request_shutdown() is only intended to be called from
> > xdp_rxq_info_unreg() code, which handles and schedules a work queue if it
> > needs to wait for in-flight frames/pages.  
> 
> So you mean I can't call it, or that I should implement the same
> deferred work?
> 
> Notice that in the stmmac case there will be no in-flight frames/pages 
> because we free them all before calling this ...

You can just use page_pool_free() (p.s I'm working on reintroducing
page_pool_destroy wrapper).  As you say, you will not have in-flight
frames/pages in this driver use-case.

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer


RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-04 Thread Jose Abreu
From: Jesper Dangaard Brouer 

> The page_pool_request_shutdown() API returns an indication of whether any
> frames/pages are still in flight, so you know when it is safe to call
> page_pool_free(), which you are also missing a call to.
> 
> This page_pool_request_shutdown() is only intended to be called from
> xdp_rxq_info_unreg() code, which handles and schedules a work queue if it
> needs to wait for in-flight frames/pages.

So you mean I can't call it, or that I should implement the same
deferred work?

Notice that in the stmmac case there will be no in-flight frames/pages 
because we free them all before calling this ...


Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-04 Thread Ilias Apalodimas
Hi Jose, 

> Thank you all for your review comments !
> 
> From: Ilias Apalodimas 
> 
> > That's why I was concerned about what will happen on > 1000b frames and
> > what the memory pressure is going to be.
> > The trade-off here is copying vs mapping/unmapping.
> 
> Well, the performance numbers I mentioned are for TSO with default MTU 
> (1500) and using iperf3 with zero-copy. Here follows netperf:
> 

Ok, I guess this should be fine. Here's why:
you'll allocate extra memory from the page pool API equal to
the number of descriptors * 1 page.
You also allocate SKBs to copy the data into and recycle the page pool
buffers, so page_pool won't add any significant memory pressure since we
expect *all* of its buffers to be recycled.
The SKBs are allocated anyway in the current driver, so bottom line you
trade off some memory (the page_pool buffers) + a memcpy per packet, and
skip the DMA map/unmap, which is the bottleneck on your hardware.
I think it's fine.
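
To put illustrative numbers on that (assuming 4 KiB pages and a 512-entry
RX ring, both of which are configuration dependent):

	/* Illustrative sizing only:
	 *   512 descriptors * 1 page * 4096 B = 2 MiB per RX queue,
	 * all of it recycled in steady state, on top of the SKBs the
	 * driver already allocates. */
	size_t pool_bytes = 512 * 4096;	/* 2 MiB */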

Cheers
/Ilias


RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-04 Thread Jose Abreu
Thank you all for your review comments !

From: Ilias Apalodimas 

> That's why I was concerned about what will happen on > 1000b frames and
> what the memory pressure is going to be.
> The trade-off here is copying vs mapping/unmapping.

Well, the performance numbers I mentioned are for TSO with default MTU 
(1500) and using iperf3 with zero-copy. Here follows netperf:

---
# netperf -c -C -H 1.2.3.2 -T 7,7 -t TCP_SENDFILE
TCP SENDFILE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 1.2.3.2
(1.2.3.2) port 0 AF_INET : demo : cpu bind
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

131072  16384  16384    10.00      9132.37   6.13     11.79    0.440   0.846

---
# netperf -c -C -H 1.2.3.2 -T 7,7 -t TCP_STREAM
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
1.2.3.2 (1.2.3.2) port 0 AF_INET : demo : cpu bind
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

131072  16384  16384    10.01      9041.21   3.20     11.75    0.232   0.852

---
# netperf -c -C -H 1.2.3.2 -T 7,7 -t UDP_STREAM
MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
1.2.3.2 (1.2.3.2) port 0 AF_INET : demo : cpu bind
Socket  Message  Elapsed      Messages                   CPU      Service
Size    Size     Time         Okay Errors   Throughput   Util     Demand
bytes   bytes    secs            #      #   10^6bits/sec % SS     us/KB

212992   65507   10.00      114455      0     5997.0     12.55    1.371
212992           10.00      114455            5997.0      8.12    0.887

---
# netperf -c -C -H 1.2.3.2 -T 7,7 -t UDP_STREAM -- -m 64
MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
1.2.3.2 (1.2.3.2) port 0 AF_INET : demo : cpu bind
Socket  Message  Elapsed      Messages                   CPU      Service
Size    Size     Time         Okay Errors   Throughput   Util     Demand
bytes   bytes    secs            #      #   10^6bits/sec % SS     us/KB

212992      64   10.00     4013480      0      205.4     12.51   39.918
212992           10.00     4013480             205.4      7.99   25.482

---
# netperf -c -C -H 1.2.3.2 -T 7,7 -t UDP_STREAM -- -m 128
MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
1.2.3.2 (1.2.3.2) port 0 AF_INET : demo : cpu bind
Socket  Message  Elapsed      Messages                   CPU      Service
Size    Size     Time         Okay Errors   Throughput   Util     Demand
bytes   bytes    secs            #      #   10^6bits/sec % SS     us/KB

212992     128   10.00     3950480      0      404.4     12.50   20.255
212992           10.00     3950442             404.4      7.70   12.485

---
# netperf -c -C -H 1.2.3.2 -T 7,7 -t UDP_STREAM -- -m 1024
MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
1.2.3.2 (1.2.3.2) port 0 AF_INET : demo : cpu bind
Socket  Message  Elapsed      Messages                   CPU      Service
Size    Size     Time         Okay Errors   Throughput   Util     Demand
bytes   bytes    secs            #      #   10^6bits/sec % SS     us/KB

212992    1024   10.00     3466506      0     2838.8     12.50    2.886
212992           10.00     3466506            2838.8      7.39    1.707


Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-04 Thread Ilias Apalodimas
On Thu, Jul 04, 2019 at 02:14:28PM +0200, Arnd Bergmann wrote:
> On Thu, Jul 4, 2019 at 12:31 PM Ilias Apalodimas
>  wrote:
> > > On Wed,  3 Jul 2019 12:37:50 +0200
> > > Jose Abreu  wrote:
> 
> > 1. page pool allocs packet. The API doesn't sync, but I *think* you don't
> > have to do it explicitly since the CPU won't touch that buffer until the
> > NAPI handler kicks in. In the NAPI handler you need to
> > dma_sync_single_for_cpu() and process the packet.
> 
> > So bottom line I *think* we can skip the dma_sync_single_for_device() on
> > the initial allocation *only*. If I am terribly wrong please let me know :)
> 
> I think you have to do a sync_single_for_device /somewhere/ before the
> buffer is given to the device. On a non-cache-coherent machine with
> a write-back cache, there may be dirty cache lines that get written back
> after the device DMA's data into it (e.g. from a previous memset
> from before the buffer got freed), so you absolutely need to flush any
> dirty cache lines on it first.
Ok, my bad, here I forgot to add "when coherency is there", since the driver
I had in mind runs on such a device (I think this is configurable though, so
I'll add the sync explicitly to make sure we won't break any configurations).

In general you are right, thanks for the explanation!
> You may also need to invalidate the cache lines in the following
> sync_single_for_cpu() to eliminate clean cache lines with stale data
> that got there when speculatively reading between the cache-invalidate
> and the DMA.
> 
>Arnd


Thanks!
/Ilias


Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-04 Thread Arnd Bergmann
On Thu, Jul 4, 2019 at 12:31 PM Ilias Apalodimas
 wrote:
> > On Wed,  3 Jul 2019 12:37:50 +0200
> > Jose Abreu  wrote:

> 1. page pool allocs packet. The API doesn't sync, but I *think* you don't
> have to do it explicitly since the CPU won't touch that buffer until the
> NAPI handler kicks in. In the NAPI handler you need to
> dma_sync_single_for_cpu() and process the packet.

> So bottom line I *think* we can skip the dma_sync_single_for_device() on the
> initial allocation *only*. If I am terribly wrong please let me know :)

I think you have to do a sync_single_for_device /somewhere/ before the
buffer is given to the device. On a non-cache-coherent machine with
a write-back cache, there may be dirty cache lines that get written back
after the device DMA's data into it (e.g. from a previous memset
from before the buffer got freed), so you absolutely need to flush any
dirty cache lines on it first.
You may also need to invalidate the cache lines in the following
sync_single_for_cpu() to eliminate clean cache lines with stale data
that got there when speculatively reading between the cache-invalidate
and the DMA.
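
As a minimal sketch of that ordering for a non-coherent RX buffer (the
names here are illustrative, not from any particular driver):

	/* Map once, skipping the implicit sync. */
	dma_addr_t addr = dma_map_page_attrs(dev, page, 0, buf_len,
					     DMA_FROM_DEVICE,
					     DMA_ATTR_SKIP_CPU_SYNC);

	/* Write back any dirty cache lines before the device may DMA. */
	dma_sync_single_for_device(dev, addr, buf_len, DMA_FROM_DEVICE);

	/* ... device DMAs a received frame into the buffer ... */

	/* Invalidate clean-but-stale lines from speculative reads. */
	dma_sync_single_for_cpu(dev, addr, buf_len, DMA_FROM_DEVICE);
	/* The CPU can now safely read the frame. */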

   Arnd


Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-04 Thread Ilias Apalodimas
Hi Jesper,

> On Thu, 4 Jul 2019 10:13:37 +
> Jose Abreu  wrote:
> > > The page_pool DMA mapping cannot be "kept" when page traveling into the
> > > network stack attached to an SKB.  (Ilias and I have a long term plan[1]
> > > to allow this, but you cannot do it ATM).  
> > 
> > The reason I recycle the page is this previous call to:
> > 
> > skb_copy_to_linear_data()
> > 
> > So, technically, I'm syncing the page(s) to the CPU and then memcpy'ing
> > to a previously allocated SKB ... so it's safe to just recycle the
> > mapping, I think.
> 
> I didn't notice the skb_copy_to_linear_data(); it will copy the entire
> frame, thus leaving the page unused and available for recycle.

Yea, this is essentially a 'copybreak' without the byte limitation that other
drivers usually impose (remember mvneta was doing this for all packets < 256b).

That's why I was concerned about what will happen on > 1000b frames and what
the memory pressure is going to be.
The trade-off here is copying vs mapping/unmapping.

> 
> Then it looks like you are doing the correct thing.  I would appreciate
> it if you could add a comment above the call like:
> 
>/* Data payload copied into SKB, page ready for recycle */
>page_pool_recycle_direct(rx_q->page_pool, buf->page);
> 
> 
> > It's kind of like using bounce buffers, and I do see a performance gain in
> > this (I think the reason is that my setup uses swiotlb for DMA mapping).
> > 
> > Anyway, I'm open to some suggestions on how to improve this ...
> 
> I was surprised to see page_pool being used outside the surrounding XDP
> APIs (include/net/xdp.h).  For your use-case, where you "just" use
> page_pool as a driver-local fast recycle-allocator for the RX-ring that
> keeps pages DMA mapped, it does make a lot of sense.  It simplifies the
> driver a fair amount:
> 
>   3 files changed, 63 insertions(+), 144 deletions(-)
> 
> Thanks for demonstrating a use-case for page_pool besides XDP, and for
> simplifying a driver with this.

Same here thanks Jose,

> 
> 
> > > Also remember that the page_pool requires your driver to do the
> > > DMA-sync operation.  I see a dma_sync_single_for_cpu(), but I
> > > didn't see a dma_sync_single_for_device() (well, I noticed one
> > > getting removed). (For some HW Ilias tells me that the
> > > dma_sync_single_for_device can be elided, so maybe this can still
> > > be correct for you).  
> > 
> > My HW just needs descriptors refilled, which are in a different coherent
> > region, so I don't see any reason for dma_sync_single_for_device() ...
> 
> For your use-case, given you are copying out the data and not writing
> into it, I don't think you need to do a sync for device (before
> giving the device the page again for another RX-ring cycle).
> 
> The way I understand the danger: if writing to the DMA memory region,
> and not doing the DMA-sync for-device, then the HW/coherency-system can
> write-back the memory later.  Which creates a race with the DMA-device,
> if it is receiving a packet and is doing a write into same DMA memory
> region.  Someone correct me if I misunderstood this...

Similar understanding here

Cheers
/Ilias


Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-04 Thread Jesper Dangaard Brouer
On Thu, 4 Jul 2019 10:13:37 +
Jose Abreu  wrote:

> From: Jesper Dangaard Brouer 
> 
> > The page_pool DMA mapping cannot be "kept" when page traveling into the
> > network stack attached to an SKB.  (Ilias and I have a long term plan[1]
> > to allow this, but you cannot do it ATM).  
> 
> The reason I recycle the page is this previous call to:
> 
>   skb_copy_to_linear_data()
> 
> So, technically, I'm syncing the page(s) to the CPU and then memcpy'ing
> to a previously allocated SKB ... so it's safe to just recycle the
> mapping, I think.

I didn't notice the skb_copy_to_linear_data(); it will copy the entire
frame, thus leaving the page unused and available for recycle.

Then it looks like you are doing the correct thing.  I would appreciate
it if you could add a comment above the call like:

   /* Data payload copied into SKB, page ready for recycle */
   page_pool_recycle_direct(rx_q->page_pool, buf->page);
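
Putting the pieces together, a sketch of the copybreak-style RX path being
discussed (variable names such as buf, ch and frame_len are illustrative,
not exact stmmac code):

	/* Make the received frame visible to the CPU before copying. */
	dma_sync_single_for_cpu(priv->device, buf->addr, frame_len,
				DMA_FROM_DEVICE);

	skb = napi_alloc_skb(&ch->rx_napi, frame_len);
	if (skb) {
		skb_copy_to_linear_data(skb, page_address(buf->page),
					frame_len);
		skb_put(skb, frame_len);
		napi_gro_receive(&ch->rx_napi, skb);
	}

	/* Data payload copied into SKB, page ready for recycle */
	page_pool_recycle_direct(rx_q->page_pool, buf->page);
	buf->page = NULL;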


> It's kind of like using bounce buffers, and I do see a performance gain in
> this (I think the reason is that my setup uses swiotlb for DMA mapping).
> 
> Anyway, I'm open to some suggestions on how to improve this ...

I was surprised to see page_pool being used outside the surrounding XDP
APIs (include/net/xdp.h).  For your use-case, where you "just" use
page_pool as a driver-local fast recycle-allocator for the RX-ring that
keeps pages DMA mapped, it does make a lot of sense.  It simplifies the
driver a fair amount:

  3 files changed, 63 insertions(+), 144 deletions(-)

Thanks for demonstrating a use-case for page_pool besides XDP, and for
simplifying a driver with this.


> > Also remember that the page_pool requires your driver to do the
> > DMA-sync operation.  I see a dma_sync_single_for_cpu(), but I
> > didn't see a dma_sync_single_for_device() (well, I noticed one
> > getting removed). (For some HW Ilias tells me that the
> > dma_sync_single_for_device can be elided, so maybe this can still
> > be correct for you).  
> 
> My HW just needs descriptors refilled, which are in a different coherent
> region, so I don't see any reason for dma_sync_single_for_device() ...

For your use-case, given you are copying out the data and not writing
into it, I don't think you need to do a sync for device (before
giving the device the page again for another RX-ring cycle).

The way I understand the danger: if writing to the DMA memory region,
and not doing the DMA-sync for-device, then the HW/coherency-system can
write-back the memory later.  Which creates a race with the DMA-device,
if it is receiving a packet and is doing a write into same DMA memory
region.  Someone correct me if I misunderstood this...
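
A short sketch of that rule for the case where the CPU does write into the
buffer before handing it back (illustrative names, not driver code):

	/* CPU writes into the buffer, dirtying cache lines. */
	memset(page_address(page), 0, headroom);

	/* Write the dirty lines back *before* the device may DMA into
	 * the same region; a later eviction would race with the DMA. */
	dma_sync_single_for_device(dev, dma_addr, buf_len,
				   DMA_BIDIRECTIONAL);

	/* Only now hand the buffer back to the RX ring. */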

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer


Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-04 Thread Ilias Apalodimas
On Thu, Jul 04, 2019 at 10:13:37AM +, Jose Abreu wrote:
> From: Jesper Dangaard Brouer 
> 
> > The page_pool DMA mapping cannot be "kept" when page traveling into the
> > network stack attached to an SKB.  (Ilias and I have a long term plan[1]
> > to allow this, but you cannot do it ATM).
> 
> The reason I recycle the page is this previous call to:
> 
>   skb_copy_to_linear_data()
> 
> So, technically, I'm syncing the page(s) to the CPU and then memcpy'ing
> to a previously allocated SKB ... so it's safe to just recycle the
> mapping, I think.
> 
> It's kind of like using bounce buffers, and I do see a performance gain in
> this (I think the reason is that my setup uses swiotlb for DMA mapping).

Maybe. Have you tested this with big/small packets?
Can you do a test with 64b/128b and 1024b, for example?
The memcpy might be cheap for small-sized packets (and cheaper than the DMA
map/unmap).

> 
> Anyway, I'm open to some suggestions on how to improve this ...
> 
> > Also remember that the page_pool requires your driver to do the DMA-sync
> > operation.  I see a dma_sync_single_for_cpu(), but I didn't see a
> > dma_sync_single_for_device() (well, I noticed one getting removed).
> > (For some HW Ilias tells me that the dma_sync_single_for_device can be
> > elided, so maybe this can still be correct for you).
> 
> My HW just needs descriptors refilled, which are in a different coherent
> region, so I don't see any reason for dma_sync_single_for_device() ...
I am a bit overloaded at the moment. I'll try to have a look at this and get
back to you.

Cheers
/Ilias


Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-04 Thread Ilias Apalodimas
Hi Jesper, Ivan,

> On Wed,  3 Jul 2019 12:37:50 +0200
> Jose Abreu  wrote:
> 
> > @@ -3547,6 +3456,9 @@ static int stmmac_rx(struct stmmac_priv *priv, int 
> > limit, u32 queue)
> >  
> > napi_gro_receive(&ch->rx_napi, skb);
> >  
> > +   page_pool_recycle_direct(rx_q->page_pool, buf->page);
> 
> This doesn't look correct.
> 
> The page_pool DMA mapping cannot be "kept" when page traveling into the
> network stack attached to an SKB.  (Ilias and I have a long term plan[1]
> to allow this, but you cannot do it ATM).
> 
> You will have to call:
>   page_pool_release_page(rx_q->page_pool, buf->page);
> 
> This will do a DMA-unmap, and you will likely lose your performance
> gain :-(
> 
> 
> > +   buf->page = NULL;
> > +
> > priv->dev->stats.rx_packets++;
> > priv->dev->stats.rx_bytes += frame_len;
> > }
> 
> Also remember that the page_pool requires your driver to do the DMA-sync
> operation.  I see a dma_sync_single_for_cpu(), but I didn't see a
> dma_sync_single_for_device() (well, I noticed one getting removed).
> (For some HW Ilias tells me that the dma_sync_single_for_device can be
> elided, so maybe this can still be correct for you).
In our case (and in the page_pool API in general) you have to track buffers
when both .ndo_xdp_xmit() and XDP_TX are used.
So the lifetime of a packet might be:

1. page pool allocs packet. The API doesn't sync, but I *think* you don't have
to do it explicitly since the CPU won't touch that buffer until the NAPI
handler kicks in. In the NAPI handler you need to dma_sync_single_for_cpu()
and process the packet.
2a) no XDP is required, so the packet is unmapped and freed
2b) .ndo_xdp_xmit is called, so the buffer needs to be mapped/unmapped
2c) XDP_TX is called. In that case we re-use an Rx buffer, so we need to
dma_sync_single_for_device()
2a and 2b won't cause any issues.
In 2c the buffer will be recycled and fed back to the device with a *correct*
sync (for_device), and all those buffers are allocated as DMA_BIDIRECTIONAL.

So bottom line I *think* we can skip the dma_sync_single_for_device() on the
initial allocation *only*. If I am terribly wrong please let me know :)
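
For case 2c, a hedged sketch of that re-sync before recycling an Rx page
into the TX path (page_pool pages are mapped DMA_BIDIRECTIONAL; pkt_offset
and pkt_len are illustrative names):

	/* The page still holds a valid DMA mapping from the pool. */
	dma_addr_t dma = page_pool_get_dma_addr(page);

	/* The XDP program may have rewritten headers, so flush the CPU's
	 * writes before the NIC transmits from the same buffer. */
	dma_sync_single_for_device(dev, dma + pkt_offset, pkt_len,
				   DMA_BIDIRECTIONAL);

	/* ... then queue (dma + pkt_offset) on the TX ring ... */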

Thanks
/Ilias
> 
> 
> [1] 
> https://github.com/xdp-project/xdp-project/blob/master/areas/mem/page_pool02_SKB_return_callback.org
> -- 
> Best regards,
>   Jesper Dangaard Brouer
>   MSc.CS, Principal Kernel Engineer at Red Hat
>   LinkedIn: http://www.linkedin.com/in/brouer


RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-04 Thread Jose Abreu
From: Jesper Dangaard Brouer 

> The page_pool DMA mapping cannot be "kept" when page traveling into the
> network stack attached to an SKB.  (Ilias and I have a long term plan[1]
> to allow this, but you cannot do it ATM).

The reason I recycle the page is this previous call to:

skb_copy_to_linear_data()

So, technically, I'm syncing the page(s) to the CPU and then memcpy'ing
to a previously allocated SKB ... so it's safe to just recycle the
mapping, I think.

It's kind of like using bounce buffers, and I do see a performance gain in
this (I think the reason is that my setup uses swiotlb for DMA mapping).

Anyway, I'm open to some suggestions on how to improve this ...

> Also remember that the page_pool requires your driver to do the DMA-sync
> operation.  I see a dma_sync_single_for_cpu(), but I didn't see a
> dma_sync_single_for_device() (well, I noticed one getting removed).
> (For some HW Ilias tells me that the dma_sync_single_for_device can be
> elided, so maybe this can still be correct for you).

My HW just needs descriptors refilled, which are in a different coherent
region, so I don't see any reason for dma_sync_single_for_device() ...


Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-04 Thread Jesper Dangaard Brouer
On Wed,  3 Jul 2019 12:37:50 +0200
Jose Abreu  wrote:

> @@ -3547,6 +3456,9 @@ static int stmmac_rx(struct stmmac_priv *priv, int 
> limit, u32 queue)
>  
>   napi_gro_receive(&ch->rx_napi, skb);
>  
> + page_pool_recycle_direct(rx_q->page_pool, buf->page);

This doesn't look correct.

The page_pool DMA mapping cannot be "kept" when page traveling into the
network stack attached to an SKB.  (Ilias and I have a long term plan[1]
to allow this, but you cannot do it ATM).

You will have to call:
  page_pool_release_page(rx_q->page_pool, buf->page);

This will do a DMA-unmap, and you will likely lose your performance
gain :-(


> + buf->page = NULL;
> +
>   priv->dev->stats.rx_packets++;
>   priv->dev->stats.rx_bytes += frame_len;
>   }

Also remember that the page_pool requires your driver to do the DMA-sync
operation.  I see a dma_sync_single_for_cpu(), but I didn't see a
dma_sync_single_for_device() (well, I noticed one getting removed).
(For some HW Ilias tells me that the dma_sync_single_for_device can be
elided, so maybe this can still be correct for you).


[1] 
https://github.com/xdp-project/xdp-project/blob/master/areas/mem/page_pool02_SKB_return_callback.org
-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer


Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-04 Thread Jesper Dangaard Brouer
On Wed,  3 Jul 2019 12:37:50 +0200
Jose Abreu  wrote:

> --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> @@ -1197,26 +1197,14 @@ static int stmmac_init_rx_buffers(struct stmmac_priv 
> *priv, struct dma_desc *p,
> int i, gfp_t flags, u32 queue)
>  {
>   struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
> - struct sk_buff *skb;
> + struct stmmac_rx_buffer *buf = &rx_q->buf_pool[i];
>  
> - skb = __netdev_alloc_skb_ip_align(priv->dev, priv->dma_buf_sz, flags);
> - if (!skb) {
> - netdev_err(priv->dev,
> -"%s: Rx init fails; skb is NULL\n", __func__);
> + buf->page = page_pool_dev_alloc_pages(rx_q->page_pool);
> + if (!buf->page)
>   return -ENOMEM;
> - }
> - rx_q->rx_skbuff[i] = skb;
> - rx_q->rx_skbuff_dma[i] = dma_map_single(priv->device, skb->data,
> - priv->dma_buf_sz,
> - DMA_FROM_DEVICE);
> - if (dma_mapping_error(priv->device, rx_q->rx_skbuff_dma[i])) {
> - netdev_err(priv->dev, "%s: DMA mapping error\n", __func__);
> - dev_kfree_skb_any(skb);
> - return -EINVAL;
> - }
> -
> - stmmac_set_desc_addr(priv, p, rx_q->rx_skbuff_dma[i]);
>  
> + buf->addr = buf->page->dma_addr;

We/Ilias added a wrapper/helper function for accessing dma_addr, as it
will help us later identifying users.

 page_pool_get_dma_addr(page)

> + stmmac_set_desc_addr(priv, p, buf->addr);
>   if (priv->dma_buf_sz == BUF_SIZE_16KiB)
>   stmmac_init_desc3(priv, p);
>  
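
With that helper, the refill path would look roughly like this (a sketch of
the quoted hunk, not a drop-in replacement):

	buf->page = page_pool_dev_alloc_pages(rx_q->page_pool);
	if (!buf->page)
		return -ENOMEM;

	/* Use the accessor rather than touching page->dma_addr directly. */
	buf->addr = page_pool_get_dma_addr(buf->page);
	stmmac_set_desc_addr(priv, p, buf->addr);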


-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer


Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-04 Thread Jesper Dangaard Brouer
On Wed,  3 Jul 2019 12:37:50 +0200
Jose Abreu  wrote:

> @@ -1498,8 +1479,9 @@ static void free_dma_rx_desc_resources(struct 
> stmmac_priv *priv)
> sizeof(struct dma_extended_desc),
> rx_q->dma_erx, rx_q->dma_rx_phy);
>  
> - kfree(rx_q->rx_skbuff_dma);
> - kfree(rx_q->rx_skbuff);
> + kfree(rx_q->buf_pool);
> + if (rx_q->page_pool)
> + page_pool_request_shutdown(rx_q->page_pool);
>   }
>  }
>  

The page_pool_request_shutdown() API return indication if there are any
in-flight frames/pages, to know when it is safe to call
page_pool_free(), which you are also missing a call to.

This page_pool_request_shutdown() is only intended to be called from
xdp_rxq_info_unreg() code, that handles and schedule a work queue if it
need to wait for in-flight frames/pages.

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer


RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

2019-07-03 Thread Jose Abreu
++ Jesper: Who is the most active committer of the page pool API (?) ... Can
you please help review this?

From: Jose Abreu 

> Mapping and unmapping a DMA region is a major bottleneck in the stmmac
> driver, especially in the RX path.
> 
> This commit introduces support for the Page Pool API and uses it in all RX
> queues. With this change, we get more stable throughput and some increase
> of bandwidth with iperf:
>   - MAC1000: 950 Mbps
>   - XGMAC: 9.22 Gbps
> 
> Signed-off-by: Jose Abreu 
> Cc: Joao Pinto 
> Cc: David S. Miller 
> Cc: Giuseppe Cavallaro 
> Cc: Alexandre Torgue 
> Cc: Maxime Coquelin 
> Cc: Maxime Ripard 
> Cc: Chen-Yu Tsai 
> ---
>  drivers/net/ethernet/stmicro/stmmac/Kconfig   |   1 +
>  drivers/net/ethernet/stmicro/stmmac/stmmac.h  |  10 +-
>  drivers/net/ethernet/stmicro/stmmac/stmmac_main.c | 196 
> ++
>  3 files changed, 63 insertions(+), 144 deletions(-)
> 
> diff --git a/drivers/net/ethernet/stmicro/stmmac/Kconfig 
> b/drivers/net/ethernet/stmicro/stmmac/Kconfig
> index 943189dcccb1..2325b40dff6e 100644
> --- a/drivers/net/ethernet/stmicro/stmmac/Kconfig
> +++ b/drivers/net/ethernet/stmicro/stmmac/Kconfig
> @@ -3,6 +3,7 @@ config STMMAC_ETH
>   tristate "STMicroelectronics Multi-Gigabit Ethernet driver"
>   depends on HAS_IOMEM && HAS_DMA
>   select MII
> + select PAGE_POOL
>   select PHYLINK
>   select CRC32
>   imply PTP_1588_CLOCK
> diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac.h 
> b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
> index 513f4e2df5f6..5cd966c154f3 100644
> --- a/drivers/net/ethernet/stmicro/stmmac/stmmac.h
> +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
> @@ -20,6 +20,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  
>  struct stmmac_resources {
>   void __iomem *addr;
> @@ -54,14 +55,19 @@ struct stmmac_tx_queue {
>   u32 mss;
>  };
>  
> +struct stmmac_rx_buffer {
> + struct page *page;
> + dma_addr_t addr;
> +};
> +
>  struct stmmac_rx_queue {
>   u32 rx_count_frames;
>   u32 queue_index;
> + struct page_pool *page_pool;
> + struct stmmac_rx_buffer *buf_pool;
>   struct stmmac_priv *priv_data;
>   struct dma_extended_desc *dma_erx;
>   struct dma_desc *dma_rx cacheline_aligned_in_smp;
> - struct sk_buff **rx_skbuff;
> - dma_addr_t *rx_skbuff_dma;
>   unsigned int cur_rx;
>   unsigned int dirty_rx;
>   u32 rx_zeroc_thresh;
> diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c 
> b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> index c8fe85ef9a7e..9f44e8193208 100644
> --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> @@ -1197,26 +1197,14 @@ static int stmmac_init_rx_buffers(struct stmmac_priv 
> *priv, struct dma_desc *p,
> int i, gfp_t flags, u32 queue)
>  {
>   struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
> - struct sk_buff *skb;
> + struct stmmac_rx_buffer *buf = &rx_q->buf_pool[i];
>  
> - skb = __netdev_alloc_skb_ip_align(priv->dev, priv->dma_buf_sz, flags);
> - if (!skb) {
> - netdev_err(priv->dev,
> -"%s: Rx init fails; skb is NULL\n", __func__);
> + buf->page = page_pool_dev_alloc_pages(rx_q->page_pool);
> + if (!buf->page)
>   return -ENOMEM;
> - }
> - rx_q->rx_skbuff[i] = skb;
> - rx_q->rx_skbuff_dma[i] = dma_map_single(priv->device, skb->data,
> - priv->dma_buf_sz,
> - DMA_FROM_DEVICE);
> - if (dma_mapping_error(priv->device, rx_q->rx_skbuff_dma[i])) {
> - netdev_err(priv->dev, "%s: DMA mapping error\n", __func__);
> - dev_kfree_skb_any(skb);
> - return -EINVAL;
> - }
> -
> - stmmac_set_desc_addr(priv, p, rx_q->rx_skbuff_dma[i]);
>  
> + buf->addr = buf->page->dma_addr;
> + stmmac_set_desc_addr(priv, p, buf->addr);
>   if (priv->dma_buf_sz == BUF_SIZE_16KiB)
>   stmmac_init_desc3(priv, p);
>  
> @@ -1232,13 +1220,10 @@ static int stmmac_init_rx_buffers(struct stmmac_priv 
> *priv, struct dma_desc *p,
>  static void stmmac_free_rx_buffer(struct stmmac_priv *priv, u32 queue, int i)
>  {
>   struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
> + struct stmmac_rx_buffer *buf = &rx_q->buf_pool[i];
>  
> - if (rx_q->rx_skbuff[i]) {
> - dma_unmap_single(priv->device, rx_q->rx_skbuff_dma[i],
> -  priv->dma_buf_sz, DMA_FROM_DEVICE);
> - dev_kfree_skb_any(rx_q->rx_skbuff[i]);
> - }
> - rx_q->rx_skbuff[i] = NULL;
> + page_pool_put_page(rx_q->page_pool, buf->page, false);
> + buf->page = NULL;
>  }
>  
>  /**
> @@ -1321,10 +1306,6 @@ static int init_dma_rx_desc_rings(struct net_device 
> *dev, gfp_t flags)
>queue);
>