Re: [PATCH 09/10] PCI: tegra: Add Tegra194 PCIe support

2019-04-10 Thread Liviu Dudau
On Wed, Apr 10, 2019 at 03:23:39PM +0530, Vidya Sagar wrote:
> On 4/10/2019 1:44 PM, Liviu Dudau wrote:
> > On Wed, Apr 10, 2019 at 11:40:40AM +0530, Vidya Sagar wrote:
> > > On 4/9/2019 6:56 PM, Bjorn Helgaas wrote:
> > > > On Tue, Apr 09, 2019 at 05:00:53PM +0530, Vidya Sagar wrote:
> > > > > On 4/6/2019 12:28 AM, Bjorn Helgaas wrote:
> > > > > > On Fri, Apr 05, 2019 at 01:23:51AM +0530, Vidya Sagar wrote:
> > > > > > > On 4/3/2019 11:06 PM, Bjorn Helgaas wrote:
> > > > > > > > On Wed, Apr 03, 2019 at 03:13:09PM +0530, Vidya Sagar wrote:
> > > > > > > > > On 4/3/2019 12:01 AM, Bjorn Helgaas wrote:
> > > > > > > > > > On Tue, Apr 02, 2019 at 12:47:48PM +0530, Vidya Sagar wrote:
> > > > > > > > > > > On 3/30/2019 2:22 AM, Bjorn Helgaas wrote:
> > > > > > > > > > > > On Tue, Mar 26, 2019 at 08:43:26PM +0530, Vidya Sagar wrote:
> > > > > > > > > > > > > Add support for Synopsys DesignWare core IP based PCIe host controller
> > > > > > > > > > > > > present in Tegra194 SoC.
> > > > > > > > > > 
> > > > > > > > > > - Why does this chip require pcie_pme_disable_msi()?  The only other
> > > > > > > > > >   use is a DMI quirk for "MSI Wind U-100", added by c39fae1416d5
> > > > > > > > > >   ("PCI PM: Make it possible to force using INTx for PCIe PME
> > > > > > > > > >   signaling").
> > > > > > > > > 
> > > > > > > > > Because Tegra194 doesn't support raising PME interrupts through MSI line.
> > > > 
> > > > > > There's something wrong here.  Either the question of how PME is
> > > > > > signaled is generic and the spec provides a way for the OS to discover
> > > > > > that method, or it's part of the device-specific architecture that
> > > > > > each host bridge driver has to know about its device.  If the former,
> > > > > > we need to make the PCI core smart enough to figure it out.  If the
> > > > > > latter, we need a better interface than this ad hoc
> > > > > > pcie_pme_disable_msi() thing.  But if it is truly the latter, your
> > > > > > current code is sufficient and we can refine it over time.
> > > > > 
> > > > > In case of Tegra194, it is the latter case.
> > > > 
> > > > This isn't a Tegra194 question; it's a question of whether this
> > > > behavior is covered by the PCIe spec.
> > > AFAIU the spec and what I heard from Nvidia hardware folks is that spec doesn't
> > > explicitly talk about this and it was a design choice made by Nvidia hardware
> > > folks to route these interrupts through legacy line instead of MSI line.
> > > 
> > > > 
> > > > > > What I suspect should happen eventually is the DWC driver should call
> > > > > > devm_pci_alloc_host_bridge() directly, as all the non-DWC drivers do.
> > > > > > That would require a little reorganization of the DWC data structures,
> > > > > > but it would be good to be more consistent.
> > > > > > 
> > > > > > For now, I think stashing the pointer in pcie_port or dw_pcie would be
> > > > > > OK.  I'm not 100% clear on the difference, but it looks like either
> > > > > > should be common across all the DWC drivers, which is what we want.
> > > > > 
> > > > > Since dw_pcie is common for both root port and end point mode structures,
> > > > > I think it makes sense to keep the pointer in pcie_port as this structure
> > > > > is specific to root port mode of operation.
> > > > > I'll make a note to reorganize code to have devm_pci_alloc_host_bridge()
> > > > > used in the beginning itself to be in line with non-DWC implementations.
> > > > > But, I'll take it up later (after I'm done with upstreaming current series)
> > > > 
> > > > Fair enough.
> > > > 
> > > > > > > .remove() internally calls pm_runtime_put_sync() API which calls
> > > > > > > .runtime_suspend(). I made a new patch to add a host_deinit() call
> > > > > > > which makes all these calls. Since host_init() is called from inside
> > > > > > > .runtime_resume() of this driver, to be in sync, I'm now calling
> > > > > > > host_deinit() from inside .runtime_suspend() API.
> > > > > > 
> > > > > > I think this is wrong.  pci_stop_root_bus() will detach all the
> > > > > > drivers from all the devices.  We don't want to do that if we're
> > > > > > merely runtime suspending the host bridge, do we?
> > > > > 
> > > > > In the current driver, the scenarios in which .runtime_suspend() is called
> > > > > are
> > > > > a) during .remove() call and
> > > > 
> > > > It makes sense that you should call pci_stop_root_bus() during
> > > > .remove(), i.e., when the host controller driver is being removed,
> > > > because the PCI bus will no longer be accessible.  I think you should
> > > > call it *directly* from tegra_pcie_dw_remove() because that will match
> > > > what other drivers do.

Re: [PATCH 09/10] PCI: tegra: Add Tegra194 PCIe support

2019-04-10 Thread Vidya Sagar

On 4/10/2019 1:44 PM, Liviu Dudau wrote:

On Wed, Apr 10, 2019 at 11:40:40AM +0530, Vidya Sagar wrote:

On 4/9/2019 6:56 PM, Bjorn Helgaas wrote:

On Tue, Apr 09, 2019 at 05:00:53PM +0530, Vidya Sagar wrote:

On 4/6/2019 12:28 AM, Bjorn Helgaas wrote:

On Fri, Apr 05, 2019 at 01:23:51AM +0530, Vidya Sagar wrote:

On 4/3/2019 11:06 PM, Bjorn Helgaas wrote:

On Wed, Apr 03, 2019 at 03:13:09PM +0530, Vidya Sagar wrote:

On 4/3/2019 12:01 AM, Bjorn Helgaas wrote:

On Tue, Apr 02, 2019 at 12:47:48PM +0530, Vidya Sagar wrote:

On 3/30/2019 2:22 AM, Bjorn Helgaas wrote:

On Tue, Mar 26, 2019 at 08:43:26PM +0530, Vidya Sagar wrote:

Add support for Synopsys DesignWare core IP based PCIe host controller
present in Tegra194 SoC.


   - Why does this chip require pcie_pme_disable_msi()?  The only other
 use is a DMI quirk for "MSI Wind U-100", added by c39fae1416d5
 ("PCI PM: Make it possible to force using INTx for PCIe PME
 signaling").


Because Tegra194 doesn't support raising PME interrupts through MSI line.



There's something wrong here.  Either the question of how PME is
signaled is generic and the spec provides a way for the OS to discover
that method, or it's part of the device-specific architecture that
each host bridge driver has to know about its device.  If the former,
we need to make the PCI core smart enough to figure it out.  If the
latter, we need a better interface than this ad hoc
pcie_pme_disable_msi() thing.  But if it is truly the latter, your
current code is sufficient and we can refine it over time.


In case of Tegra194, it is the latter case.


This isn't a Tegra194 question; it's a question of whether this
behavior is covered by the PCIe spec.

AFAIU the spec and what I heard from Nvidia hardware folks is that spec doesn't
explicitly talk about this and it was a design choice made by Nvidia hardware
folks to route these interrupts through legacy line instead of MSI line.




What I suspect should happen eventually is the DWC driver should call
devm_pci_alloc_host_bridge() directly, as all the non-DWC drivers do.
That would require a little reorganization of the DWC data structures,
but it would be good to be more consistent.

For now, I think stashing the pointer in pcie_port or dw_pcie would be
OK.  I'm not 100% clear on the difference, but it looks like either
should be common across all the DWC drivers, which is what we want.


Since dw_pcie is common for both root port and end point mode structures,
I think it makes sense to keep the pointer in pcie_port as this structure
is specific to root port mode of operation.
I'll make a note to reorganize code to have devm_pci_alloc_host_bridge()
used in the beginning itself to be in line with non-DWC implementations.
But, I'll take it up later (after I'm done with upstreaming current series)


Fair enough.


.remove() internally calls pm_runtime_put_sync() API which calls
.runtime_suspend(). I made a new patch to add a host_deinit() call
which makes all these calls. Since host_init() is called from inside
.runtime_resume() of this driver, to be in sync, I'm now calling
host_deinit() from inside .runtime_suspend() API.


I think this is wrong.  pci_stop_root_bus() will detach all the
drivers from all the devices.  We don't want to do that if we're
merely runtime suspending the host bridge, do we?


In the current driver, the scenarios in which .runtime_suspend() is called
are
a) during .remove() call and


It makes sense that you should call pci_stop_root_bus() during
.remove(), i.e., when the host controller driver is being removed,
because the PCI bus will no longer be accessible.  I think you should
call it *directly* from tegra_pcie_dw_remove() because that will match
what other drivers do.


b) when there is no endpoint found and controller would be shutdown
In both cases, it is required to stop the root bus and remove all devices,
so, instead of having same call present in respective paths, I kept them
in .runtime_suspend() itself to avoid code duplication.


I don't understand this part.  We should be able to runtime suspend
the host controller without detaching drivers for child devices.

If you shutdown the controller completely and detach the *host
controller driver*, sure, it makes sense to detach drivers from child
devices.  But that would be handled by the host controller .remove()
method, not by the runtime suspend method.

I think it is time I give some background about why I chose to implement
.runtime_suspend() and .runtime_resume() APIs in the first place. We wanted to
power down the PCIe controller if there is no link up (i.e. the slot is open and no
endpoint devices are connected). We want to achieve this without returning a failure
in the .probe() call. Given that PCIe IP power partitioning is handled by the generic
power domain framework, the power partition gets unpowergated before .probe() gets
called and gets powergated either when a failure is returned in .probe() or when
pm_runtime_put_sync() is called. So, I chose to call pm_runtime_put_sync() in the
no-link-up scenario and chose to implement .runtime_suspend() to handle

Re: [PATCH 09/10] PCI: tegra: Add Tegra194 PCIe support

2019-04-10 Thread Liviu Dudau
On Wed, Apr 10, 2019 at 11:40:40AM +0530, Vidya Sagar wrote:
> On 4/9/2019 6:56 PM, Bjorn Helgaas wrote:
> > On Tue, Apr 09, 2019 at 05:00:53PM +0530, Vidya Sagar wrote:
> > > On 4/6/2019 12:28 AM, Bjorn Helgaas wrote:
> > > > On Fri, Apr 05, 2019 at 01:23:51AM +0530, Vidya Sagar wrote:
> > > > > On 4/3/2019 11:06 PM, Bjorn Helgaas wrote:
> > > > > > On Wed, Apr 03, 2019 at 03:13:09PM +0530, Vidya Sagar wrote:
> > > > > > > On 4/3/2019 12:01 AM, Bjorn Helgaas wrote:
> > > > > > > > On Tue, Apr 02, 2019 at 12:47:48PM +0530, Vidya Sagar wrote:
> > > > > > > > > On 3/30/2019 2:22 AM, Bjorn Helgaas wrote:
> > > > > > > > > > On Tue, Mar 26, 2019 at 08:43:26PM +0530, Vidya Sagar wrote:
> > > > > > > > > > > Add support for Synopsys DesignWare core IP based PCIe host controller
> > > > > > > > > > > present in Tegra194 SoC.
> > > > > > > > 
> > > > > > > >   - Why does this chip require pcie_pme_disable_msi()?  The only other
> > > > > > > > use is a DMI quirk for "MSI Wind U-100", added by c39fae1416d5
> > > > > > > > ("PCI PM: Make it possible to force using INTx for PCIe PME
> > > > > > > > signaling").
> > > > > > > 
> > > > > > > Because Tegra194 doesn't support raising PME interrupts through MSI line.
> > 
> > > > There's something wrong here.  Either the question of how PME is
> > > > signaled is generic and the spec provides a way for the OS to discover
> > > > that method, or it's part of the device-specific architecture that
> > > > each host bridge driver has to know about its device.  If the former,
> > > > we need to make the PCI core smart enough to figure it out.  If the
> > > > latter, we need a better interface than this ad hoc
> > > > pcie_pme_disable_msi() thing.  But if it is truly the latter, your
> > > > current code is sufficient and we can refine it over time.
> > > 
> > > In case of Tegra194, it is the latter case.
> > 
> > This isn't a Tegra194 question; it's a question of whether this
> > behavior is covered by the PCIe spec.
> AFAIU the spec and what I heard from Nvidia hardware folks is that spec doesn't
> explicitly talk about this and it was a design choice made by Nvidia hardware
> folks to route these interrupts through legacy line instead of MSI line.
> 
> > 
> > > > What I suspect should happen eventually is the DWC driver should call
> > > > devm_pci_alloc_host_bridge() directly, as all the non-DWC drivers do.
> > > > That would require a little reorganization of the DWC data structures,
> > > > but it would be good to be more consistent.
> > > > 
> > > > For now, I think stashing the pointer in pcie_port or dw_pcie would be
> > > > OK.  I'm not 100% clear on the difference, but it looks like either
> > > > should be common across all the DWC drivers, which is what we want.
> > > 
> > > Since dw_pcie is common for both root port and end point mode structures,
> > > I think it makes sense to keep the pointer in pcie_port as this structure
> > > is specific to root port mode of operation.
> > > I'll make a note to reorganize code to have devm_pci_alloc_host_bridge()
> > > used in the beginning itself to be in line with non-DWC implementations.
> > > But, I'll take it up later (after I'm done with upstreaming current series)
> > 
> > Fair enough.
> > 
> > > > > .remove() internally calls pm_runtime_put_sync() API which calls
> > > > > .runtime_suspend(). I made a new patch to add a host_deinit() call
> > > > > which makes all these calls. Since host_init() is called from inside
> > > > > .runtime_resume() of this driver, to be in sync, I'm now calling
> > > > > host_deinit() from inside .runtime_suspend() API.
> > > > 
> > > > I think this is wrong.  pci_stop_root_bus() will detach all the
> > > > drivers from all the devices.  We don't want to do that if we're
> > > > merely runtime suspending the host bridge, do we?
> > > 
> > > In the current driver, the scenarios in which .runtime_suspend() is called
> > > are
> > > a) during .remove() call and
> > 
> > It makes sense that you should call pci_stop_root_bus() during
> > .remove(), i.e., when the host controller driver is being removed,
> > because the PCI bus will no longer be accessible.  I think you should
> > call it *directly* from tegra_pcie_dw_remove() because that will match
> > what other drivers do.
> > 
> > > b) when there is no endpoint found and controller would be shutdown
> > > In both cases, it is required to stop the root bus and remove all devices,
> > > so, instead of having same call present in respective paths, I kept them
> > > in .runtime_suspend() itself to avoid code duplication.
> > 
> > I don't understand this part.  We should be able to runtime suspend
> > the host controller without detaching drivers for child devices.
> > 
> > If you shutdown the controller completely and detach the *host
> > controller driver*, sure, it makes sense to detach drivers from child
> > devices.  

Re: [PATCH 09/10] PCI: tegra: Add Tegra194 PCIe support

2019-04-09 Thread Vidya Sagar

On 4/9/2019 6:56 PM, Bjorn Helgaas wrote:

On Tue, Apr 09, 2019 at 05:00:53PM +0530, Vidya Sagar wrote:

On 4/6/2019 12:28 AM, Bjorn Helgaas wrote:

On Fri, Apr 05, 2019 at 01:23:51AM +0530, Vidya Sagar wrote:

On 4/3/2019 11:06 PM, Bjorn Helgaas wrote:

On Wed, Apr 03, 2019 at 03:13:09PM +0530, Vidya Sagar wrote:

On 4/3/2019 12:01 AM, Bjorn Helgaas wrote:

On Tue, Apr 02, 2019 at 12:47:48PM +0530, Vidya Sagar wrote:

On 3/30/2019 2:22 AM, Bjorn Helgaas wrote:

On Tue, Mar 26, 2019 at 08:43:26PM +0530, Vidya Sagar wrote:

Add support for Synopsys DesignWare core IP based PCIe host controller
present in Tegra194 SoC.


  - Why does this chip require pcie_pme_disable_msi()?  The only other
use is a DMI quirk for "MSI Wind U-100", added by c39fae1416d5
("PCI PM: Make it possible to force using INTx for PCIe PME
signaling").


Because Tegra194 doesn't support raising PME interrupts through MSI line.



There's something wrong here.  Either the question of how PME is
signaled is generic and the spec provides a way for the OS to discover
that method, or it's part of the device-specific architecture that
each host bridge driver has to know about its device.  If the former,
we need to make the PCI core smart enough to figure it out.  If the
latter, we need a better interface than this ad hoc
pcie_pme_disable_msi() thing.  But if it is truly the latter, your
current code is sufficient and we can refine it over time.


In case of Tegra194, it is the latter case.


This isn't a Tegra194 question; it's a question of whether this
behavior is covered by the PCIe spec.

AFAIU the spec and what I heard from Nvidia hardware folks is that spec doesn't
explicitly talk about this and it was a design choice made by Nvidia hardware
folks to route these interrupts through legacy line instead of MSI line.




What I suspect should happen eventually is the DWC driver should call
devm_pci_alloc_host_bridge() directly, as all the non-DWC drivers do.
That would require a little reorganization of the DWC data structures,
but it would be good to be more consistent.

For now, I think stashing the pointer in pcie_port or dw_pcie would be
OK.  I'm not 100% clear on the difference, but it looks like either
should be common across all the DWC drivers, which is what we want.


Since dw_pcie is common for both root port and end point mode structures,
I think it makes sense to keep the pointer in pcie_port as this structure
is specific to root port mode of operation.
I'll make a note to reorganize code to have devm_pci_alloc_host_bridge()
used in the beginning itself to be in line with non-DWC implementations.
But, I'll take it up later (after I'm done with upstreaming current series)


Fair enough.


.remove() internally calls pm_runtime_put_sync() API which calls
.runtime_suspend(). I made a new patch to add a host_deinit() call
which makes all these calls. Since host_init() is called from inside
.runtime_resume() of this driver, to be in sync, I'm now calling
host_deinit() from inside .runtime_suspend() API.


I think this is wrong.  pci_stop_root_bus() will detach all the
drivers from all the devices.  We don't want to do that if we're
merely runtime suspending the host bridge, do we?


In the current driver, the scenarios in which .runtime_suspend() is called
are
a) during .remove() call and


It makes sense that you should call pci_stop_root_bus() during
.remove(), i.e., when the host controller driver is being removed,
because the PCI bus will no longer be accessible.  I think you should
call it *directly* from tegra_pcie_dw_remove() because that will match
what other drivers do.


b) when there is no endpoint found and controller would be shutdown
In both cases, it is required to stop the root bus and remove all devices,
so, instead of having same call present in respective paths, I kept them
in .runtime_suspend() itself to avoid code duplication.


I don't understand this part.  We should be able to runtime suspend
the host controller without detaching drivers for child devices.

If you shutdown the controller completely and detach the *host
controller driver*, sure, it makes sense to detach drivers from child
devices.  But that would be handled by the host controller .remove()
method, not by the runtime suspend method.

I think it is time I give some background about why I chose to implement
.runtime_suspend() and .runtime_resume() APIs in the first place. We wanted to
power down the PCIe controller if there is no link up (i.e. the slot is open and no
endpoint devices are connected). We want to achieve this without returning a failure
in the .probe() call. Given that PCIe IP power partitioning is handled by the generic
power domain framework, the power partition gets unpowergated before .probe() gets
called and gets powergated either when a failure is returned in .probe() or when
pm_runtime_put_sync() is called. So, I chose to call pm_runtime_put_sync() in the
no-link-up scenario and chose to implement .runtime_suspend() to handle

Re: [PATCH 09/10] PCI: tegra: Add Tegra194 PCIe support

2019-04-09 Thread Bjorn Helgaas
On Tue, Apr 09, 2019 at 05:00:53PM +0530, Vidya Sagar wrote:
> On 4/6/2019 12:28 AM, Bjorn Helgaas wrote:
> > On Fri, Apr 05, 2019 at 01:23:51AM +0530, Vidya Sagar wrote:
> > > On 4/3/2019 11:06 PM, Bjorn Helgaas wrote:
> > > > On Wed, Apr 03, 2019 at 03:13:09PM +0530, Vidya Sagar wrote:
> > > > > On 4/3/2019 12:01 AM, Bjorn Helgaas wrote:
> > > > > > On Tue, Apr 02, 2019 at 12:47:48PM +0530, Vidya Sagar wrote:
> > > > > > > On 3/30/2019 2:22 AM, Bjorn Helgaas wrote:
> > > > > > > > On Tue, Mar 26, 2019 at 08:43:26PM +0530, Vidya Sagar wrote:
> > > > > > > > > Add support for Synopsys DesignWare core IP based PCIe host controller
> > > > > > > > > present in Tegra194 SoC.
> > > > > > 
> > > > > >  - Why does this chip require pcie_pme_disable_msi()?  The only other
> > > > > >    use is a DMI quirk for "MSI Wind U-100", added by c39fae1416d5
> > > > > >    ("PCI PM: Make it possible to force using INTx for PCIe PME
> > > > > >    signaling").
> > > > > 
> > > > > Because Tegra194 doesn't support raising PME interrupts through MSI line.

> > There's something wrong here.  Either the question of how PME is
> > signaled is generic and the spec provides a way for the OS to discover
> > that method, or it's part of the device-specific architecture that
> > each host bridge driver has to know about its device.  If the former,
> > we need to make the PCI core smart enough to figure it out.  If the
> > latter, we need a better interface than this ad hoc
> > pcie_pme_disable_msi() thing.  But if it is truly the latter, your
> > current code is sufficient and we can refine it over time.
>
> In case of Tegra194, it is the latter case.

This isn't a Tegra194 question; it's a question of whether this
behavior is covered by the PCIe spec.

> > What I suspect should happen eventually is the DWC driver should call
> > devm_pci_alloc_host_bridge() directly, as all the non-DWC drivers do.
> > That would require a little reorganization of the DWC data structures,
> > but it would be good to be more consistent.
> > 
> > For now, I think stashing the pointer in pcie_port or dw_pcie would be
> > OK.  I'm not 100% clear on the difference, but it looks like either
> > should be common across all the DWC drivers, which is what we want.
>
> Since dw_pcie is common for both root port and end point mode structures,
> I think it makes sense to keep the pointer in pcie_port as this structure
> is specific to root port mode of operation.
> I'll make a note to reorganize code to have devm_pci_alloc_host_bridge()
> used in the beginning itself to be inline with non-DWC implementations.
> But, I'll take it up later (after I'm done with upstreaming current series)

Fair enough.

> > > .remove() internally calls pm_runtime_put_sync() API which calls
> > > .runtime_suspend(). I made a new patch to add a host_deinit() call
> > > which makes all these calls. Since host_init() is called from inside
> > > .runtime_resume() of this driver, to be in sync, I'm now calling
> > > host_deinit() from inside .runtime_suspend() API.
> > 
> > I think this is wrong.  pci_stop_root_bus() will detach all the
> > drivers from all the devices.  We don't want to do that if we're
> > merely runtime suspending the host bridge, do we?
>
> In the current driver, the scenarios in which .runtime_suspend() is called
> are
> a) during .remove() call and

It makes sense that you should call pci_stop_root_bus() during
.remove(), i.e., when the host controller driver is being removed,
because the PCI bus will no longer be accessible.  I think you should
call it *directly* from tegra_pcie_dw_remove() because that will match
what other drivers do.

> b) when there is no endpoint found and controller would be shutdown
> In both cases, it is required to stop the root bus and remove all devices,
> so, instead of having same call present in respective paths, I kept them
> in .runtime_suspend() itself to avoid code duplication.

I don't understand this part.  We should be able to runtime suspend
the host controller without detaching drivers for child devices.

If you shutdown the controller completely and detach the *host
controller driver*, sure, it makes sense to detach drivers from child
devices.  But that would be handled by the host controller .remove()
method, not by the runtime suspend method.

Bjorn


Re: [PATCH 09/10] PCI: tegra: Add Tegra194 PCIe support

2019-04-09 Thread Vidya Sagar

On 4/6/2019 12:28 AM, Bjorn Helgaas wrote:

On Fri, Apr 05, 2019 at 01:23:51AM +0530, Vidya Sagar wrote:

On 4/3/2019 11:06 PM, Bjorn Helgaas wrote:

On Wed, Apr 03, 2019 at 03:13:09PM +0530, Vidya Sagar wrote:

On 4/3/2019 12:01 AM, Bjorn Helgaas wrote:

On Tue, Apr 02, 2019 at 12:47:48PM +0530, Vidya Sagar wrote:

On 3/30/2019 2:22 AM, Bjorn Helgaas wrote:

On Tue, Mar 26, 2019 at 08:43:26PM +0530, Vidya Sagar wrote:

Add support for Synopsys DesignWare core IP based PCIe host controller
present in Tegra194 SoC.


 - Why does this chip require pcie_pme_disable_msi()?  The only other
   use is a DMI quirk for "MSI Wind U-100", added by c39fae1416d5
   ("PCI PM: Make it possible to force using INTx for PCIe PME
   signaling").


Because Tegra194 doesn't support raising PME interrupts through MSI line.


What does the spec say about this?  Is hardware supposed to
support MSI for PME?  Given that MSI Wind U-100 and Tegra194 are
the only two cases we know about where PME via MSI isn't
supported, it seems like there must be either a requirement for
that or some mechanism for the OS to figure this out, e.g., a
capability bit.


AFAIU, the spec doesn't say anything about whether the PME interrupt should
be through MSI or by other means. As far as Tegra194 is concerned,
there are only two interrupt lines that go from the PCIe IP to the GIC: one
is the legacy interrupt (that represents legacy interrupts coming
over the PCIe bus from downstream devices and also the events happening
internal to the root port) and the other is the MSI interrupt (that
represents MSI interrupts coming over the PCIe bus from downstream
devices). Hardware folks had a choice to route PME interrupts
either through the legacy interrupt line or through the MSI interrupt line,
and the legacy interrupt line was chosen as a design choice. That being
the case at the hardware level, I used pcie_pme_disable_msi() to inform
the PCIe subsystem that PME interrupts are not expected over MSI.


There's something wrong here.  Either the question of how PME is
signaled is generic and the spec provides a way for the OS to discover
that method, or it's part of the device-specific architecture that
each host bridge driver has to know about its device.  If the former,
we need to make the PCI core smart enough to figure it out.  If the
latter, we need a better interface than this ad hoc
pcie_pme_disable_msi() thing.  But if it is truly the latter, your
current code is sufficient and we can refine it over time.

In case of Tegra194, it is the latter case.




I see that an earlier patch added "bus" to struct pcie_port.
I think it would be better to somehow connect to the
pci_host_bridge struct.  Several other drivers already do
this; see uses of pci_host_bridge_from_priv().


All non-DesignWare based implementations save their private data
structure in 'private' pointer of struct pci_host_bridge and use
pci_host_bridge_from_priv() to get it back. But, DesignWare
based implementations save pcie_port in 'sysdata' and nothing in
'private' pointer. So,  I'm not sure if
pci_host_bridge_from_priv() can be used in this case. Please do
let me know if you think otherwise.


DesignWare-based drivers should have a way to retrieve the
pci_host_bridge pointer.  It doesn't have to be *exactly* the same
as non-DesignWare drivers, but it should be similar.


I gave my reasoning as to why with the current code, it is not
possible to get the pci_host_bridge structure pointer from struct
pcie_port pointer in another thread as a reply to Thierry Reding's
comments. Since Jisheng's changes to support remove functionality are
accepted, I think using bus pointer saved in struct pcie_port
pointer shouldn't be any issue now. Please do let me know if that is
something not acceptable.


That would give you the bus, as well as flags like
no_ext_tags, native_aer, etc, which this driver, being a host
bridge driver that's responsible for this part of the
firmware/OS interface, may conceivably need.


I think saving the pp->root_bus pointer as Jisheng's patch does is a
sub-optimal solution.  If we figure out how to save the
pci_host_bridge pointer, we automatically get the root bus pointer as
well.

It may require some restructuring to save the pci_host_bridge pointer,
but I doubt it's really *impossible*.


Is it OK to save pci_host_bridge pointer in pcie_port structure
directly? I see that as another way to get pci_host_bridge pointer
from pcie_port structure. My understanding is that, to get
pci_host_bridge pointer, either pcie_port structure should be part
of pci_host_bridge structure (which is the case with all non-DW
implementations) or pcie_port should have a pointer to some
structure which is directly (and not by means of a pointer) part of
pci_host_bridge structure so that container_of() can be used to get
pci_host_bridge.  As I see, there is no data object of
pci_host_bridge whose pointer is saved in pcie_port structure. In
fact, in reverse, pcie_port's struct dev pointer is saved as paren

Re: [PATCH 09/10] PCI: tegra: Add Tegra194 PCIe support

2019-04-05 Thread Bjorn Helgaas
On Fri, Apr 05, 2019 at 01:23:51AM +0530, Vidya Sagar wrote:
> On 4/3/2019 11:06 PM, Bjorn Helgaas wrote:
> > On Wed, Apr 03, 2019 at 03:13:09PM +0530, Vidya Sagar wrote:
> > > On 4/3/2019 12:01 AM, Bjorn Helgaas wrote:
> > > > On Tue, Apr 02, 2019 at 12:47:48PM +0530, Vidya Sagar wrote:
> > > > > On 3/30/2019 2:22 AM, Bjorn Helgaas wrote:
> > > > > > On Tue, Mar 26, 2019 at 08:43:26PM +0530, Vidya Sagar wrote:
> > > > > > > Add support for Synopsys DesignWare core IP based PCIe host controller
> > > > > > > present in Tegra194 SoC.
> > > > 
> > > > - Why does this chip require pcie_pme_disable_msi()?  The only other
> > > >   use is a DMI quirk for "MSI Wind U-100", added by c39fae1416d5
> > > >   ("PCI PM: Make it possible to force using INTx for PCIe PME
> > > >   signaling").
> > > 
> > > Because Tegra194 doesn't support raising PME interrupts through MSI line.
> > 
> > What does the spec say about this?  Is hardware supposed to
> > support MSI for PME?  Given that MSI Wind U-100 and Tegra194 are
> > the only two cases we know about where PME via MSI isn't
> > supported, it seems like there must be either a requirement for
> > that or some mechanism for the OS to figure this out, e.g., a
> > capability bit.
>
> AFAIU, the spec doesn't say anything about whether the PME interrupt should
> be through MSI or by other means. As far as Tegra194 is concerned,
> there are only two interrupt lines that go from the PCIe IP to the GIC: one
> is the legacy interrupt (that represents legacy interrupts coming
> over the PCIe bus from downstream devices and also the events happening
> internal to the root port) and the other is the MSI interrupt (that
> represents MSI interrupts coming over the PCIe bus from downstream
> devices). Hardware folks had a choice to route PME interrupts
> either through the legacy interrupt line or through the MSI interrupt line,
> and the legacy interrupt line was chosen as a design choice. That being
> the case at the hardware level, I used pcie_pme_disable_msi() to inform
> the PCIe subsystem that PME interrupts are not expected over MSI.

There's something wrong here.  Either the question of how PME is
signaled is generic and the spec provides a way for the OS to discover
that method, or it's part of the device-specific architecture that
each host bridge driver has to know about its device.  If the former,
we need to make the PCI core smart enough to figure it out.  If the
latter, we need a better interface than this ad hoc
pcie_pme_disable_msi() thing.  But if it is truly the latter, your
current code is sufficient and we can refine it over time.
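As a standalone sketch of the flag-style interface Bjorn alludes to ("a better interface than this ad hoc pcie_pme_disable_msi() thing"): a per-bridge flag, by analogy with no_ext_tags and native_aer, that the PME service could consult. All names here are hypothetical; no such flag exists in the kernel at this point.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/*
 * Hypothetical per-bridge flag, modeled on existing pci_host_bridge
 * flags such as no_ext_tags/native_aer. A host bridge driver that
 * cannot deliver PME via MSI (like Tegra194) would set it in probe.
 */
struct pci_host_bridge_sketch {
	bool no_msi_pme;	/* host cannot signal PME through MSI */
};

/* Sketch of how the core PME service could pick its IRQ source. */
static const char *pme_irq_source(const struct pci_host_bridge_sketch *bridge)
{
	return bridge->no_msi_pme ? "INTx" : "MSI";
}
```

With this shape the device-specific knowledge stays in the bridge driver, while the core makes the INTx-vs-MSI decision generically instead of through a one-off exported function.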

> > > > > > I see that an earlier patch added "bus" to struct pcie_port.
> > > > > > I think it would be better to somehow connect to the
> > > > > > pci_host_bridge struct.  Several other drivers already do
> > > > > > this; see uses of pci_host_bridge_from_priv().
> > > > > 
> > > > > All non-DesignWare based implementations save their private data
> > > > > structure in 'private' pointer of struct pci_host_bridge and use
> > > > > pci_host_bridge_from_priv() to get it back. But, DesignWare
> > > > > based implementations save pcie_port in 'sysdata' and nothing in
> > > > > 'private' pointer. So,  I'm not sure if
> > > > > pci_host_bridge_from_priv() can be used in this case. Please do
> > > > > let me know if you think otherwise.
> > > > 
> > > > DesignWare-based drivers should have a way to retrieve the
> > > > pci_host_bridge pointer.  It doesn't have to be *exactly* the same
> > > > as non-DesignWare drivers, but it should be similar.
> > > 
> > > I gave my reasoning as to why with the current code, it is not
> > > possible to get the pci_host_bridge structure pointer from struct
> > > pcie_port pointer in another thread as a reply to Thierry Reding's
> > > comments. Since Jisheng's changes to support remove functionality are
> > > accepted, I think using bus pointer saved in struct pcie_port
> > > pointer shouldn't be any issue now. Please do let me know if that is
> > > something not acceptable.
> > > 
> > > > > > That would give you the bus, as well as flags like
> > > > > > no_ext_tags, native_aer, etc, which this driver, being a host
> > > > > > bridge driver that's responsible for this part of the
> > > > > > firmware/OS interface, may conceivably need.
> > 
> > I think saving the pp->root_bus pointer as Jisheng's patch does is a
> > sub-optimal solution.  If we figure out how to save the
> > pci_host_bridge pointer, we automatically get the root bus pointer as
> > well.
> > 
> > It may require some restructuring to save the pci_host_bridge pointer,
> > but I doubt it's really *impossible*.
>
> Is it OK to save pci_host_bridge pointer in pcie_port structure
> directly? I see that as another way to get pci_host_bridge pointer
> from pcie_port structure. My understanding is that, to get
> pci_host_bridge pointer, either pcie_port structure should be part
> of pci_host_bridge structure (which is the case with all non-DW
>

Re: [PATCH 09/10] PCI: tegra: Add Tegra194 PCIe support

2019-04-04 Thread Vidya Sagar

On 4/3/2019 11:06 PM, Bjorn Helgaas wrote:

On Wed, Apr 03, 2019 at 03:13:09PM +0530, Vidya Sagar wrote:

On 4/3/2019 12:01 AM, Bjorn Helgaas wrote:

On Tue, Apr 02, 2019 at 12:47:48PM +0530, Vidya Sagar wrote:

On 3/30/2019 2:22 AM, Bjorn Helgaas wrote:

On Tue, Mar 26, 2019 at 08:43:26PM +0530, Vidya Sagar wrote:

Add support for Synopsys DesignWare core IP based PCIe host controller
present in Tegra194 SoC.


- Why does this chip require pcie_pme_disable_msi()?  The only other
  use is a DMI quirk for "MSI Wind U-100", added by c39fae1416d5
  ("PCI PM: Make it possible to force using INTx for PCIe PME
  signaling").


Because Tegra194 doesn't support raising PME interrupts through MSI line.


What does the spec say about this?  Is hardware supposed to support
MSI for PME?  Given that MSI Wind U-100 and Tegra194 are the only two
cases we know about where PME via MSI isn't supported, it seems like
there must be either a requirement for that or some mechanism for the
OS to figure this out, e.g., a capability bit.

AFAIU, the spec doesn't say anything about whether the PME interrupt should be
delivered through MSI or by other means. As far as Tegra194 is concerned, there
are only two interrupt lines that go from the PCIe IP to the GIC: one is the
legacy interrupt (which represents legacy interrupts coming over the PCIe bus
from downstream devices as well as events internal to the root port) and the
other is the MSI interrupt (which represents MSI interrupts coming over the
PCIe bus from downstream devices). The hardware folks could have routed PME
interrupts through either line, and the legacy interrupt line was chosen as a
design choice. That being the case at the hardware level, I used
pcie_pme_disable_msi() to inform the PCIe subsystem that PME interrupts are not
expected over MSI.




I see that an earlier patch added "bus" to struct pcie_port.
I think it would be better to somehow connect to the
pci_host_bridge struct.  Several other drivers already do
this; see uses of pci_host_bridge_from_priv().


All non-DesignWare based implementations save their private data
structure in 'private' pointer of struct pci_host_bridge and use
pci_host_bridge_from_priv() to get it back. But, DesignWare
based implementations save pcie_port in 'sysdata' and nothing in
'private' pointer. So,  I'm not sure if
pci_host_bridge_from_priv() can be used in this case. Please do
let me know if you think otherwise.


DesignWare-based drivers should have a way to retrieve the
pci_host_bridge pointer.  It doesn't have to be *exactly* the same
as non-DesignWare drivers, but it should be similar.


I gave my reasoning as to why with the current code, it is not
possible to get the pci_host_bridge structure pointer from struct
pcie_port pointer in another thread as a reply to Thierry Reding's
comments. Since Jisheng's changes to support remove functionality are
accepted, I think using bus pointer saved in struct pcie_port
pointer shouldn't be any issue now. Please do let me know if that is
something not acceptable.


That would give you the bus, as well as flags like
no_ext_tags, native_aer, etc, which this driver, being a host
bridge driver that's responsible for this part of the
firmware/OS interface, may conceivably need.


I think saving the pp->root_bus pointer as Jisheng's patch does is a
sub-optimal solution.  If we figure out how to save the
pci_host_bridge pointer, we automatically get the root bus pointer as
well.

It may require some restructuring to save the pci_host_bridge pointer,
but I doubt it's really *impossible*.

Is it OK to save the pci_host_bridge pointer in the pcie_port structure
directly? I see that as another way to get the pci_host_bridge pointer from the
pcie_port structure. My understanding is that, to get the pci_host_bridge
pointer, either the pcie_port structure should be part of the pci_host_bridge
structure (which is the case with all non-DW implementations) or pcie_port
should have a pointer to some structure which is directly (and not by means of
a pointer) part of the pci_host_bridge structure, so that container_of() can be
used to get the pci_host_bridge. As I see it, there is no data object of
pci_host_bridge whose pointer is saved in the pcie_port structure. In fact, the
reverse is true: pcie_port's struct dev pointer is saved as the parent of
pci_host_bridge's struct dev. So, another way would be to iterate over the
children of pcie_port's struct dev pointer to find pci_host_bridge's dev
pointer and from there get pci_host_bridge through container_of(). But I think
that is complicating it more than using the bus pointer from pcie_port. I'm not
sure if I'm able to convey the issue I'm facing here in getting pci_host_bridge
from pcie_port, but doing anything else seems to complicate it even more.
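The constraint being described can be shown in isolation. container_of() only works when the inner structure is embedded directly in the outer one, because it relies on a compile-time offset; a mere pointer member has no fixed offset back to its container. The structs below are hypothetical stand-ins for pci_host_bridge/pcie_port, not the real kernel definitions:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal container_of, equivalent to the kernel's definition. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct pcie_port_sketch { int dummy; };

struct host_bridge_sketch {
	int flags;
	/*
	 * Embedded directly, so &bridge->pp sits at a fixed offset
	 * inside the bridge and container_of() can walk back to it.
	 * If this were `struct pcie_port_sketch *pp` instead (as in
	 * the DW drivers today), no such fixed offset would exist.
	 */
	struct pcie_port_sketch pp;
};

static struct host_bridge_sketch *bridge_from_port(struct pcie_port_sketch *pp)
{
	return container_of(pp, struct host_bridge_sketch, pp);
}
```

This is exactly why the non-DW drivers, which embed their private data in pci_host_bridge, can use pci_host_bridge_from_priv(), while the DW core as it stands cannot.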




+static int tegra_pcie_dw_runtime_suspend(struct device *dev)
+{
+   struct tegra_pcie_dw *pcie = dev_get_drvdata(dev);
+
+   tegra_pcie_downstream_dev_to_D0(pcie);
+
+   pci_stop_root_bus(pcie->pci.pp.bus);

Re: [PATCH 09/10] PCI: tegra: Add Tegra194 PCIe support

2019-04-03 Thread Bjorn Helgaas
On Wed, Apr 03, 2019 at 03:13:09PM +0530, Vidya Sagar wrote:
> On 4/3/2019 12:01 AM, Bjorn Helgaas wrote:
> > On Tue, Apr 02, 2019 at 12:47:48PM +0530, Vidya Sagar wrote:
> > > On 3/30/2019 2:22 AM, Bjorn Helgaas wrote:
> > > > On Tue, Mar 26, 2019 at 08:43:26PM +0530, Vidya Sagar wrote:
> > > > > Add support for Synopsys DesignWare core IP based PCIe host controller
> > > > > present in Tegra194 SoC.
> > 
> >- Why does this chip require pcie_pme_disable_msi()?  The only other
> >  use is a DMI quirk for "MSI Wind U-100", added by c39fae1416d5
> >  ("PCI PM: Make it possible to force using INTx for PCIe PME
> >  signaling").
>
> Because Tegra194 doesn't support raising PME interrupts through MSI line.

What does the spec say about this?  Is hardware supposed to support
MSI for PME?  Given that MSI Wind U-100 and Tegra194 are the only two
cases we know about where PME via MSI isn't supported, it seems like
there must be either a requirement for that or some mechanism for the
OS to figure this out, e.g., a capability bit.

> > > > I see that an earlier patch added "bus" to struct pcie_port.
> > > > I think it would be better to somehow connect to the
> > > > pci_host_bridge struct.  Several other drivers already do
> > > > this; see uses of pci_host_bridge_from_priv().
> > >
> > > All non-DesignWare based implementations save their private data
> > > structure in 'private' pointer of struct pci_host_bridge and use
> > > pci_host_bridge_from_priv() to get it back. But, DesignWare
> > > based implementations save pcie_port in 'sysdata' and nothing in
> > > 'private' pointer. So,  I'm not sure if
> > > pci_host_bridge_from_priv() can be used in this case. Please do
> > > let me know if you think otherwise.
> > 
> > DesignWare-based drivers should have a way to retrieve the
> > pci_host_bridge pointer.  It doesn't have to be *exactly* the same
> > as non-DesignWare drivers, but it should be similar.
>
> I gave my reasoning as to why with the current code, it is not
> possible to get the pci_host_bridge structure pointer from struct
> pcie_port pointer in another thread as a reply to Thierry Reding's
> comments. Since Jisheng's changes to support remove functionality are
> accepted, I think using bus pointer saved in struct pcie_port
> pointer shouldn't be any issue now. Please do let me know if that is
> something not acceptable.
>
> > > > That would give you the bus, as well as flags like
> > > > no_ext_tags, native_aer, etc, which this driver, being a host
> > > > bridge driver that's responsible for this part of the
> > > > firmware/OS interface, may conceivably need.

I think saving the pp->root_bus pointer as Jisheng's patch does is a
sub-optimal solution.  If we figure out how to save the
pci_host_bridge pointer, we automatically get the root bus pointer as
well.

It may require some restructuring to save the pci_host_bridge pointer,
but I doubt it's really *impossible*.

> > > > > +static int tegra_pcie_dw_runtime_suspend(struct device *dev)
> > > > > +{
> > > > > + struct tegra_pcie_dw *pcie = dev_get_drvdata(dev);
> > > > > +
> > > > > + tegra_pcie_downstream_dev_to_D0(pcie);
> > > > > +
> > > > > + pci_stop_root_bus(pcie->pci.pp.bus);
> > > > > + pci_remove_root_bus(pcie->pci.pp.bus);
> > > > 
> > > > Why are you calling these?  No other drivers do this except in
> > > > their .remove() methods.  Is there something special about
> > > > Tegra, or is this something the other drivers *should* be
> > > > doing?
> > >
> > > Since this API is called by remove, I'm removing the hierarchy
> > > to safely bring down all the devices. I'll have to re-visit this
> > > part as Jisheng Zhang's patches
> > > https://patchwork.kernel.org/project/linux-pci/list/?series=98559
> > > are now approved and I need to verify this part after
> > > cherry-picking Jisheng's changes.
> > 
> > Tegra194 should do this the same way as other drivers, independent
> > of Jisheng's changes.
>
> When other Designware implementations add remove functionality, even
> they should be calling these APIs (Jisheng also mentioned the same
> in his commit message)

My point is that these APIs should be called from driver .remove()
methods, not from .runtime_suspend() methods.

Bjorn


Re: [PATCH 09/10] PCI: tegra: Add Tegra194 PCIe support

2019-04-03 Thread Vidya Sagar

On 4/3/2019 12:01 AM, Bjorn Helgaas wrote:

On Tue, Apr 02, 2019 at 12:47:48PM +0530, Vidya Sagar wrote:

On 3/30/2019 2:22 AM, Bjorn Helgaas wrote:

On Tue, Mar 26, 2019 at 08:43:26PM +0530, Vidya Sagar wrote:

Add support for Synopsys DesignWare core IP based PCIe host controller
present in Tegra194 SoC.



+#include "../../pcie/portdrv.h"


What's this for?  I didn't see any obvious uses of things from
portdrv.h, but I didn't actually try to build without it.

This is for pcie_pme_disable_msi() API. Since this is defined in portdrv.h
file, I'm including it here.


Hmm, OK, I missed that.  If you need pcie_pme_disable_msi(), you
definitely need portdrv.h.  But you're still a singleton in terms of
including it, so it leads to follow-up questions:

   - Why does this chip require pcie_pme_disable_msi()?  The only other
 use is a DMI quirk for "MSI Wind U-100", added by c39fae1416d5
 ("PCI PM: Make it possible to force using INTx for PCIe PME
 signaling").

Because Tegra194 doesn't support raising PME interrupts through MSI line.


   - Is this a workaround for a Tegra194 defect?  If so, it should be
 separated out and identified as such.  Otherwise it will get
 copied to other places where it doesn't belong.

This is a guideline from the hardware team that MSI for PME should be disabled.
I'll make an explicit comment in the code that it is specific to Tegra194.


   - You only call it from the .runtime_resume() hook.  That doesn't
 make sense to me: if you never suspend, you never disable MSI for
 PME signaling.

.runtime_resume() is not only called during resume but also in the normal path:
since we use the PM framework, pm_runtime_get_sync() is ultimately called in
probe, which executes the .runtime_resume() hook.




+   val_w = dw_pcie_readw_dbi(pci, CFG_LINK_STATUS);
+   while (!(val_w & PCI_EXP_LNKSTA_DLLLA)) {
+   if (!count) {
+   val = readl(pcie->appl_base + APPL_DEBUG);
+   val &= APPL_DEBUG_LTSSM_STATE_MASK;
+   val >>= APPL_DEBUG_LTSSM_STATE_SHIFT;
+   tmp = readl(pcie->appl_base + APPL_LINK_STATUS);
+   tmp &= APPL_LINK_STATUS_RDLH_LINK_UP;
+   if (val == 0x11 && !tmp) {
+   dev_info(pci->dev, "link is down in DLL");
+   dev_info(pci->dev,
+"trying again with DLFE disabled\n");
+   /* disable LTSSM */
+   val = readl(pcie->appl_base + APPL_CTRL);
+   val &= ~APPL_CTRL_LTSSM_EN;
+   writel(val, pcie->appl_base + APPL_CTRL);
+
+   reset_control_assert(pcie->core_rst);
+   reset_control_deassert(pcie->core_rst);
+
+   offset =
+   dw_pcie_find_ext_capability(pci,
+   PCI_EXT_CAP_ID_DLF)
+   + PCI_DLF_CAP;


This capability offset doesn't change, does it?  Could it be computed
outside the loop?

This is the only place where the DLF offset is needed, and this scenario is
very rare: so far only a legacy ASMedia USB3.0 card requires DLF to be disabled
to get the PCIe link up. So, I thought of calculating the offset here instead
of using a separate variable.


Hmm.  Sounds like this is essentially a quirk to work around some
hardware issue in Tegra194.

Is there some way you can pull this out into a separate function so it
doesn't clutter the normal path and it's more obvious that it's a
workaround?


+   val = dw_pcie_readl_dbi(pci, offset);
+   val &= ~DL_FEATURE_EXCHANGE_EN;
+   dw_pcie_writel_dbi(pci, offset, val);
+
+   tegra_pcie_dw_host_init(&pcie->pci.pp);


This looks like some sort of "wait for link up" retry loop, but a
recursive call seems a little unusual.  My 5 second analysis is that
the loop could run this 200 times, and you sure don't want the
possibility of a 200-deep call chain.  Is there way to split out the
host init from the link-up polling?

Again, this recursive call comes into the picture only for a legacy ASMedia
USB3.0 card, and it is going to be a 1-deep call chain as the recursion takes
place only once depending on the condition. Apart from the legacy ASMedia card,
no other card out of the huge number we have tested hits this path at this
point in time.


We need to be able to analyze the code without spending time working
out what arcane PCIe spec details might limit this to fewer than 200
iterations, or relying on assumptions about how many cards have been
tested.

If you can restructure this so the "wait for link up" loop looks like
all the other drivers, and the DLF issue is separated out and just
causes a retry of the "wait for link up", I think that will help a lot.

Re: [PATCH 09/10] PCI: tegra: Add Tegra194 PCIe support

2019-04-03 Thread Vidya Sagar

On 4/2/2019 7:44 PM, Thierry Reding wrote:

On Tue, Apr 02, 2019 at 12:47:48PM +0530, Vidya Sagar wrote:

On 3/30/2019 2:22 AM, Bjorn Helgaas wrote:

[...]

+static int tegra_pcie_dw_host_init(struct pcie_port *pp)
+{

[...]

+   val_w = dw_pcie_readw_dbi(pci, CFG_LINK_STATUS);
+   while (!(val_w & PCI_EXP_LNKSTA_DLLLA)) {
+   if (!count) {
+   val = readl(pcie->appl_base + APPL_DEBUG);
+   val &= APPL_DEBUG_LTSSM_STATE_MASK;
+   val >>= APPL_DEBUG_LTSSM_STATE_SHIFT;
+   tmp = readl(pcie->appl_base + APPL_LINK_STATUS);
+   tmp &= APPL_LINK_STATUS_RDLH_LINK_UP;
+   if (val == 0x11 && !tmp) {
+   dev_info(pci->dev, "link is down in DLL");
+   dev_info(pci->dev,
+"trying again with DLFE disabled\n");
+   /* disable LTSSM */
+   val = readl(pcie->appl_base + APPL_CTRL);
+   val &= ~APPL_CTRL_LTSSM_EN;
+   writel(val, pcie->appl_base + APPL_CTRL);
+
+   reset_control_assert(pcie->core_rst);
+   reset_control_deassert(pcie->core_rst);
+
+   offset =
+   dw_pcie_find_ext_capability(pci,
+   PCI_EXT_CAP_ID_DLF)
+   + PCI_DLF_CAP;


This capability offset doesn't change, does it?  Could it be computed
outside the loop?

This is the only place where the DLF offset is needed, and this scenario is
very rare: so far only a legacy ASMedia USB3.0 card requires DLF to be disabled
to get the PCIe link up. So, I thought of calculating the offset here instead
of using a separate variable.




+   val = dw_pcie_readl_dbi(pci, offset);
+   val &= ~DL_FEATURE_EXCHANGE_EN;
+   dw_pcie_writel_dbi(pci, offset, val);
+
+   tegra_pcie_dw_host_init(&pcie->pci.pp);


This looks like some sort of "wait for link up" retry loop, but a
recursive call seems a little unusual.  My 5 second analysis is that
the loop could run this 200 times, and you sure don't want the
possibility of a 200-deep call chain.  Is there way to split out the
host init from the link-up polling?

Again, this recursive call comes into the picture only for a legacy ASMedia
USB3.0 card, and it is going to be a 1-deep call chain as the recursion takes
place only once depending on the condition. Apart from the legacy ASMedia card,
no other card out of the huge number we have tested hits this path at this
point in time.


A more idiomatic way would be to add a "retry:" label somewhere and goto
that after disabling DLFE. That way you achieve the same effect, but you
can avoid the recursion, even if it is harmless in practice.

Initially I thought of using goto to keep it simple, but I assumed it would be
discouraged and hence used recursion. But yeah, I agree that goto would keep it
simple, and I'll switch to goto now.
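The restructuring Thierry suggests can be sketched in isolation: a `retry:` label replaces the recursive tegra_pcie_dw_host_init() call, and the DLFE workaround runs at most once. The helpers below are stubs standing in for the real hardware accessors (the stub pretends the link only trains after DLFE is disabled, mimicking the legacy ASMedia USB3.0 card):

```c
#include <assert.h>
#include <stdbool.h>

static int attempts;			/* how many link-up attempts ran */
static bool dlfe_enabled = true;	/* stand-in for the DLF capability bit */

/* Stub for the "wait for link up" poll. */
static bool link_came_up(void)
{
	attempts++;
	/* Pretend the link trains only once DLFE has been disabled. */
	return !dlfe_enabled;
}

/* Stub for clearing DL_FEATURE_EXCHANGE_EN in the DLF capability. */
static void disable_dlfe(void)
{
	dlfe_enabled = false;
}

static int host_init(void)
{
	bool retried = false;

retry:
	if (link_came_up())
		return 0;

	if (!retried) {
		retried = true;
		disable_dlfe();
		goto retry;	/* one bounded retry, no recursion */
	}

	return -1;		/* link is genuinely down */
}
```

The `retried` flag makes the bound on iterations explicit in the code itself, which addresses Bjorn's concern about having to reason from PCIe spec details or tested-card counts.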




+static int tegra_pcie_dw_probe(struct platform_device *pdev)
+{
+   struct tegra_pcie_dw *pcie;
+   struct pcie_port *pp;
+   struct dw_pcie *pci;
+   struct phy **phy;
+   struct resource *dbi_res;
+   struct resource *atu_dma_res;
+   const struct of_device_id *match;
+   const struct tegra_pcie_of_data *data;
+   char *name;
+   int ret, i;
+
+   pcie = devm_kzalloc(&pdev->dev, sizeof(*pcie), GFP_KERNEL);
+   if (!pcie)
+   return -ENOMEM;
+
+   pci = &pcie->pci;
+   pci->dev = &pdev->dev;
+   pci->ops = &tegra_dw_pcie_ops;
+   pp = &pci->pp;
+   pcie->dev = &pdev->dev;
+
+   match = of_match_device(of_match_ptr(tegra_pcie_dw_of_match),
+   &pdev->dev);
+   if (!match)
+   return -EINVAL;


Logically could be the first thing in the function since it doesn't
depend on anything.

Done




+   data = (struct tegra_pcie_of_data *)match->data;


of_device_get_match_data() can help remove some of the above
boilerplate. Also, there's no reason to check for a failure with these
functions. The driver is OF-only and can only ever be probed if the
device exists, in which case match (or data for that matter) will never
be NULL.

Done.




I see that an earlier patch added "bus" to struct pcie_port.  I think
it would be better to somehow connect to the pci_host_bridge struct.
Several other drivers already do this; see uses of
pci_host_bridge_from_priv().

All non-DesignWare based implementations save their private data structure
in the 'private' pointer of struct pci_host_bridge and use
pci_host_bridge_from_priv() to get it back. But DesignWare based
implementations save pcie_port in 'sysdata' and nothing in the 'private'
pointer.

Re: [PATCH 09/10] PCI: tegra: Add Tegra194 PCIe support

2019-04-02 Thread Bjorn Helgaas
On Tue, Apr 02, 2019 at 12:47:48PM +0530, Vidya Sagar wrote:
> On 3/30/2019 2:22 AM, Bjorn Helgaas wrote:
> > On Tue, Mar 26, 2019 at 08:43:26PM +0530, Vidya Sagar wrote:
> > > Add support for Synopsys DesignWare core IP based PCIe host controller
> > > present in Tegra194 SoC.

> > > +#include "../../pcie/portdrv.h"
> > 
> > What's this for?  I didn't see any obvious uses of things from
> > portdrv.h, but I didn't actually try to build without it.
> This is for pcie_pme_disable_msi() API. Since this is defined in portdrv.h
> file, I'm including it here.

Hmm, OK, I missed that.  If you need pcie_pme_disable_msi(), you
definitely need portdrv.h.  But you're still a singleton in terms of
including it, so it leads to follow-up questions:

  - Why does this chip require pcie_pme_disable_msi()?  The only other
use is a DMI quirk for "MSI Wind U-100", added by c39fae1416d5
("PCI PM: Make it possible to force using INTx for PCIe PME
signaling").

  - Is this a workaround for a Tegra194 defect?  If so, it should be
separated out and identified as such.  Otherwise it will get
copied to other places where it doesn't belong.

  - You only call it from the .runtime_resume() hook.  That doesn't
make sense to me: if you never suspend, you never disable MSI for
PME signaling.

> > > + val_w = dw_pcie_readw_dbi(pci, CFG_LINK_STATUS);
> > > + while (!(val_w & PCI_EXP_LNKSTA_DLLLA)) {
> > > + if (!count) {
> > > + val = readl(pcie->appl_base + APPL_DEBUG);
> > > + val &= APPL_DEBUG_LTSSM_STATE_MASK;
> > > + val >>= APPL_DEBUG_LTSSM_STATE_SHIFT;
> > > + tmp = readl(pcie->appl_base + APPL_LINK_STATUS);
> > > + tmp &= APPL_LINK_STATUS_RDLH_LINK_UP;
> > > + if (val == 0x11 && !tmp) {
> > > + dev_info(pci->dev, "link is down in DLL");
> > > + dev_info(pci->dev,
> > > +  "trying again with DLFE disabled\n");
> > > + /* disable LTSSM */
> > > + val = readl(pcie->appl_base + APPL_CTRL);
> > > + val &= ~APPL_CTRL_LTSSM_EN;
> > > + writel(val, pcie->appl_base + APPL_CTRL);
> > > +
> > > + reset_control_assert(pcie->core_rst);
> > > + reset_control_deassert(pcie->core_rst);
> > > +
> > > + offset =
> > > + dw_pcie_find_ext_capability(pci,
> > > + PCI_EXT_CAP_ID_DLF)
> > > + + PCI_DLF_CAP;
> > 
> > This capability offset doesn't change, does it?  Could it be computed
> > outside the loop?
> This is the only place where DLF offset is needed and gets calculated and this
> scenario is very rare as so far only a legacy ASMedia USB3.0 card requires DLF
> to be disabled to get PCIe link up. So, I thought of calculating the offset
> here itself instead of using a separate variable.

Hmm.  Sounds like this is essentially a quirk to work around some
hardware issue in Tegra194.

Is there some way you can pull this out into a separate function so it
doesn't clutter the normal path and it's more obvious that it's a
workaround?

> > > + val = dw_pcie_readl_dbi(pci, offset);
> > > + val &= ~DL_FEATURE_EXCHANGE_EN;
> > > + dw_pcie_writel_dbi(pci, offset, val);
> > > +
> > > + tegra_pcie_dw_host_init(&pcie->pci.pp);
> > 
> > This looks like some sort of "wait for link up" retry loop, but a
> > recursive call seems a little unusual.  My 5 second analysis is that
> > the loop could run this 200 times, and you sure don't want the
> > possibility of a 200-deep call chain.  Is there way to split out the
> > host init from the link-up polling?
> Again, this recursive calling comes into picture only for a legacy ASMedia
> USB3.0 card and it is going to be a 1-deep call chain as the recursion takes
> place only once depending on the condition. Apart from the legacy ASMedia
> card, there is no other card at this point in time out of a huge number of
> cards that we have tested.

We need to be able to analyze the code without spending time working
out what arcane PCIe spec details might limit this to fewer than 200
iterations, or relying on assumptions about how many cards have been
tested.

If you can restructure this so the "wait for link up" loop looks like
all the other drivers, and the DLF issue is separated out and just
causes a retry of the "wait for link up", I think that will help a
lot.

> > > +static void tegra_pcie_dw_scan_bus(struct pcie_port *pp)
> > > +{
> > > + struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
> > > + struct tegra_pcie_dw *pcie = dw_pcie_to_tegra_pcie(pci);
> > > + u32 speed;
> > > +
> > > + if (!tegra_pcie_dw_link_up(pci))
> > > + return;
> > > +
> > >

Re: [PATCH 09/10] PCI: tegra: Add Tegra194 PCIe support

2019-04-02 Thread Thierry Reding
On Tue, Apr 02, 2019 at 12:47:48PM +0530, Vidya Sagar wrote:
> On 3/30/2019 2:22 AM, Bjorn Helgaas wrote:
[...]
> > > +static int tegra_pcie_dw_host_init(struct pcie_port *pp)
> > > +{
[...]
> > > + val_w = dw_pcie_readw_dbi(pci, CFG_LINK_STATUS);
> > > + while (!(val_w & PCI_EXP_LNKSTA_DLLLA)) {
> > > + if (!count) {
> > > + val = readl(pcie->appl_base + APPL_DEBUG);
> > > + val &= APPL_DEBUG_LTSSM_STATE_MASK;
> > > + val >>= APPL_DEBUG_LTSSM_STATE_SHIFT;
> > > + tmp = readl(pcie->appl_base + APPL_LINK_STATUS);
> > > + tmp &= APPL_LINK_STATUS_RDLH_LINK_UP;
> > > + if (val == 0x11 && !tmp) {
> > > + dev_info(pci->dev, "link is down in DLL");
> > > + dev_info(pci->dev,
> > > +  "trying again with DLFE disabled\n");
> > > + /* disable LTSSM */
> > > + val = readl(pcie->appl_base + APPL_CTRL);
> > > + val &= ~APPL_CTRL_LTSSM_EN;
> > > + writel(val, pcie->appl_base + APPL_CTRL);
> > > +
> > > + reset_control_assert(pcie->core_rst);
> > > + reset_control_deassert(pcie->core_rst);
> > > +
> > > + offset =
> > > + dw_pcie_find_ext_capability(pci,
> > > + PCI_EXT_CAP_ID_DLF)
> > > + + PCI_DLF_CAP;
> > 
> > This capability offset doesn't change, does it?  Could it be computed
> > outside the loop?
> This is the only place where DLF offset is needed and gets calculated and this
> scenario is very rare as so far only a legacy ASMedia USB3.0 card requires DLF
> to be disabled to get PCIe link up. So, I thought of calculating the offset
> here itself instead of using a separate variable.
> 
> > 
> > > + val = dw_pcie_readl_dbi(pci, offset);
> > > + val &= ~DL_FEATURE_EXCHANGE_EN;
> > > + dw_pcie_writel_dbi(pci, offset, val);
> > > +
> > > + tegra_pcie_dw_host_init(&pcie->pci.pp);
> > 
> > This looks like some sort of "wait for link up" retry loop, but a
> > recursive call seems a little unusual.  My 5 second analysis is that
> > the loop could run this 200 times, and you sure don't want the
> > possibility of a 200-deep call chain.  Is there way to split out the
> > host init from the link-up polling?
> Again, this recursive calling comes into picture only for a legacy ASMedia
> USB3.0 card and it is going to be a 1-deep call chain as the recursion takes
> place only once depending on the condition. Apart from the legacy ASMedia
> card, there is no other card at this point in time out of a huge number of
> cards that we have tested.

A more idiomatic way would be to add a "retry:" label somewhere and goto
that after disabling DLFE. That way you achieve the same effect, but you
can avoid the recursion, even if it is harmless in practice.

> > > +static int tegra_pcie_dw_probe(struct platform_device *pdev)
> > > +{
> > > + struct tegra_pcie_dw *pcie;
> > > + struct pcie_port *pp;
> > > + struct dw_pcie *pci;
> > > + struct phy **phy;
> > > + struct resource *dbi_res;
> > > + struct resource *atu_dma_res;
> > > + const struct of_device_id *match;
> > > + const struct tegra_pcie_of_data *data;
> > > + char *name;
> > > + int ret, i;
> > > +
> > > + pcie = devm_kzalloc(&pdev->dev, sizeof(*pcie), GFP_KERNEL);
> > > + if (!pcie)
> > > + return -ENOMEM;
> > > +
> > > + pci = &pcie->pci;
> > > + pci->dev = &pdev->dev;
> > > + pci->ops = &tegra_dw_pcie_ops;
> > > + pp = &pci->pp;
> > > + pcie->dev = &pdev->dev;
> > > +
> > > + match = of_match_device(of_match_ptr(tegra_pcie_dw_of_match),
> > > + &pdev->dev);
> > > + if (!match)
> > > + return -EINVAL;
> > 
> > Logically could be the first thing in the function since it doesn't
> > depend on anything.
> Done
> 
> > 
> > > + data = (struct tegra_pcie_of_data *)match->data;

of_device_get_match_data() can help remove some of the above
boilerplate. Also, there's no reason to check for a failure with these
functions. The driver is OF-only and can only ever be probed if the
device exists, in which case match (or data for that matter) will never
be NULL.
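The simplification Thierry is pointing at can be sketched with a stub (the `of_device_get_match_data_stub` below is a hypothetical stand-in for the real `of_device_get_match_data()`, which takes a `struct device *`): one call replaces the of_match_device() lookup, the NULL check, and the cast of match->data.

```c
#include <assert.h>
#include <stddef.h>

struct tegra_pcie_of_data { int mode; };	/* hypothetical match data */

static const struct tegra_pcie_of_data tegra194_data = { .mode = 1 };

/* Stub mimicking of_device_get_match_data(dev) for an OF-only driver. */
static const void *of_device_get_match_data_stub(void *dev)
{
	(void)dev;
	return &tegra194_data;
}

/*
 * Probe-side usage: no of_match_device(), no NULL check, no cast of
 * match->data. For an OF-only driver the device can only probe if it
 * matched, so the returned data is never NULL.
 */
static const struct tegra_pcie_of_data *probe_get_data(void *dev)
{
	return of_device_get_match_data_stub(dev);
}
```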

> > I see that an earlier patch added "bus" to struct pcie_port.  I think
> > it would be better to somehow connect to the pci_host_bridge struct.
> > Several other drivers already do this; see uses of
> > pci_host_bridge_from_priv().
> All non-DesignWare based implementations save their private data structure
> in 'private' pointer of struct pci_host_bridge and use
> pci_host_bridge_from_priv() to get it back. But, DesignWare based
> implementations save pcie_port in 'sysdata' and nothing in 'private'
> pointer. So, I'm not sure if pci_host_bridge_from_priv() can be used in
> this case.

Re: [PATCH 09/10] PCI: tegra: Add Tegra194 PCIe support

2019-04-02 Thread Vidya Sagar

On 3/30/2019 2:22 AM, Bjorn Helgaas wrote:

Hi Vidya,

Wow, there's a lot of nice work here!  Thanks for that!

On Tue, Mar 26, 2019 at 08:43:26PM +0530, Vidya Sagar wrote:

Add support for Synopsys DesignWare core IP based PCIe host controller
present in Tegra194 SoC.


General comments:

   - There are a few multi-line comments that don't match the
 prevailing style:

 /*
 * Text...
 */

   - Comments and dev_info()/dev_err() messages are inconsistent about
 starting with upper-case or lower-case letters.

   - Use "MSI", "IRQ", "PCIe", "CPU", etc in comments and messages.

   - There are a few functions that use "&pdev->dev" many times; can
 you add a "struct device *dev = &pdev->dev" to reduce the
 redundancy?

Done.




+#include "../../pcie/portdrv.h"


What's this for?  I didn't see any obvious uses of things from
portdrv.h, but I didn't actually try to build without it.

This is for pcie_pme_disable_msi() API. Since this is defined in portdrv.h
file, I'm including it here.




+struct tegra_pcie_dw {
+   struct device   *dev;
+   struct resource *appl_res;
+   struct resource *dbi_res;
+   struct resource *atu_dma_res;
+   void __iomem*appl_base;
+   struct clk  *core_clk;
+   struct reset_control*core_apb_rst;
+   struct reset_control*core_rst;
+   struct dw_pcie  pci;
+   enum dw_pcie_device_mode mode;
+
+   bool disable_clock_request;
+   bool power_down_en;
+   u8 init_link_width;
+   bool link_state;
+   u32 msi_ctrl_int;
+   u32 num_lanes;
+   u32 max_speed;
+   u32 init_speed;
+   bool cdm_check;
+   u32 cid;
+   int pex_wake;
+   bool update_fc_fixup;
+   int n_gpios;
+   int *gpios;
+#if defined(CONFIG_PCIEASPM)
+   u32 cfg_link_cap_l1sub;
+   u32 event_cntr_ctrl;
+   u32 event_cntr_data;
+   u32 aspm_cmrt;
+   u32 aspm_pwr_on_t;
+   u32 aspm_l0s_enter_lat;
+   u32 disabled_aspm_states;
+#endif


The above could be indented the same as the rest of the struct?

Done.




+   struct regulator*pex_ctl_reg;
+
+   int phy_count;
+   struct phy  **phy;
+
+   struct dentry   *debugfs;
+};



+static void apply_bad_link_workaround(struct pcie_port *pp)
+{
+   struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+   struct tegra_pcie_dw *pcie = dw_pcie_to_tegra_pcie(pci);
+   u16 val;
+
+   /*
+* NOTE:- Since this scenario is uncommon and link as
+* such is not stable anyway, not waiting to confirm
+* if link is really transiting to Gen-2 speed


s/transiting/transitioning/

I think there are other uses of "transit" when you mean "transition".

Done.




+static int tegra_pcie_dw_rd_own_conf(struct pcie_port *pp, int where, int size,
+u32 *val)
+{
+   struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+
+   /*
+* This is an endpoint mode specific register happen to appear even
+* when controller is operating in root port mode and system hangs
+* when it is accessed with link being in ASPM-L1 state.
+* So skip accessing it altogether
+*/
+   if (where == PORT_LOGIC_MSIX_DOORBELL) {
+   *val = 0x;
+   return PCIBIOS_SUCCESSFUL;
+   } else {
+   return dw_pcie_read(pci->dbi_base + where, size, val);
+   }
+}
+
+static int tegra_pcie_dw_wr_own_conf(struct pcie_port *pp, int where, int size,
+u32 val)
+{
+   struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+
+   /* This is EP specific register and system hangs when it is
+* accessed with link being in ASPM-L1 state.
+* So skip accessing it altogether
+*/
+   if (where == PORT_LOGIC_MSIX_DOORBELL)
+   return PCIBIOS_SUCCESSFUL;
+   else
+   return dw_pcie_write(pci->dbi_base + where, size, val);


These two functions are almost identical and they could look more
similar.  This one has the wrong multi-line comment style, uses "EP"
instead of "endpoint", etc.  Use this style for the "if" since the
first case is really an error case:

   if (where == PORT_LOGIC_MSIX_DOORBELL) {
 ...
 return ...;
   }

   return dw_pcie_...();

Done.
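A runnable userspace sketch of the two accessors after applying the suggested style, with the special case handled first and the common path falling through unindented. The DWC helpers are mocked here, and both the doorbell offset value and the all-ones read value are assumptions for illustration (the archived patch shows a truncated constant), not taken verbatim from the driver:

```c
#include <assert.h>
#include <string.h>

#define PORT_LOGIC_MSIX_DOORBELL	0x948	/* assumed offset, illustrative */
#define PCIBIOS_SUCCESSFUL		0x00

static unsigned char dbi[0x1000];	/* fake DBI register space */

/* Stand-ins for dw_pcie_read()/dw_pcie_write() so the shape is runnable. */
static int dw_pcie_read(void *addr, int size, unsigned int *val)
{
	*val = 0;
	memcpy(val, addr, size);
	return PCIBIOS_SUCCESSFUL;
}

static int dw_pcie_write(void *addr, int size, unsigned int val)
{
	memcpy(addr, &val, size);
	return PCIBIOS_SUCCESSFUL;
}

/* Read accessor: the endpoint-only doorbell register is special-cased
 * first, then the common path returns directly. */
static int rd_own_conf(int where, int size, unsigned int *val)
{
	if (where == PORT_LOGIC_MSIX_DOORBELL) {
		*val = 0xffffffff;	/* assumed all-ones fill value */
		return PCIBIOS_SUCCESSFUL;
	}

	return dw_pcie_read(dbi + where, size, val);
}

/* Write accessor: now a mirror image of the read path. */
static int wr_own_conf(int where, int size, unsigned int val)
{
	if (where == PORT_LOGIC_MSIX_DOORBELL)
		return PCIBIOS_SUCCESSFUL;

	return dw_pcie_write(dbi + where, size, val);
}
```

With this shape the two functions differ only in the one line that matters, which is the point of Bjorn's comment.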




+static int tegra_pcie_dw_host_init(struct pcie_port *pp)
+{
+   struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+   struct tegra_pcie_dw *pcie = dw_pcie_to_tegra_pcie(pci);
+   int count = 200;
+   u32 val, tmp, offset;
+   u16 val_w;
+
+#if defined(CONFIG_PCIEASPM)
+   pcie->cfg_link_cap_l1sub =
+   dw_pcie_find_ext_capability(pci, PCI_EXT_CAP_ID_L1SS) +
+   PCI_L1SS_CAP;
+#endif
+   val = dw_pcie_readl_dbi(pci, PCI_IO_BASE);
+   val &= ~(IO_BASE_IO_DECODE | IO_BASE_IO_DECODE_BIT8);
+   dw_pcie_writel_dbi(pci, PCI_IO_BASE, val);

Re: [PATCH 09/10] PCI: tegra: Add Tegra194 PCIe support

2019-03-29 Thread Bjorn Helgaas
Hi Vidya,

Wow, there's a lot of nice work here!  Thanks for that!

On Tue, Mar 26, 2019 at 08:43:26PM +0530, Vidya Sagar wrote:
> Add support for Synopsys DesignWare core IP based PCIe host controller
> present in Tegra194 SoC.

General comments:

  - There are a few multi-line comments that don't match the
prevailing style:

/*
 * Text...
 */

  - Comments and dev_info()/dev_err() messages are inconsistent about
starting with upper-case or lower-case letters.

  - Use "MSI", "IRQ", "PCIe", "CPU", etc in comments and messages.

  - There are a few functions that use "&pdev->dev" many times; can
you add a "struct device *dev = &pdev->dev" to reduce the
redundancy?

> +#include "../../pcie/portdrv.h"

What's this for?  I didn't see any obvious uses of things from
portdrv.h, but I didn't actually try to build without it.

> +struct tegra_pcie_dw {
> + struct device   *dev;
> + struct resource *appl_res;
> + struct resource *dbi_res;
> + struct resource *atu_dma_res;
> + void __iomem*appl_base;
> + struct clk  *core_clk;
> + struct reset_control*core_apb_rst;
> + struct reset_control*core_rst;
> + struct dw_pcie  pci;
> + enum dw_pcie_device_mode mode;
> +
> + bool disable_clock_request;
> + bool power_down_en;
> + u8 init_link_width;
> + bool link_state;
> + u32 msi_ctrl_int;
> + u32 num_lanes;
> + u32 max_speed;
> + u32 init_speed;
> + bool cdm_check;
> + u32 cid;
> + int pex_wake;
> + bool update_fc_fixup;
> + int n_gpios;
> + int *gpios;
> +#if defined(CONFIG_PCIEASPM)
> + u32 cfg_link_cap_l1sub;
> + u32 event_cntr_ctrl;
> + u32 event_cntr_data;
> + u32 aspm_cmrt;
> + u32 aspm_pwr_on_t;
> + u32 aspm_l0s_enter_lat;
> + u32 disabled_aspm_states;
> +#endif

The above could be indented the same as the rest of the struct?

> + struct regulator*pex_ctl_reg;
> +
> + int phy_count;
> + struct phy  **phy;
> +
> + struct dentry   *debugfs;
> +};

> +static void apply_bad_link_workaround(struct pcie_port *pp)
> +{
> + struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
> + struct tegra_pcie_dw *pcie = dw_pcie_to_tegra_pcie(pci);
> + u16 val;
> +
> + /*
> +  * NOTE:- Since this scenario is uncommon and link as
> +  * such is not stable anyway, not waiting to confirm
> +  * if link is really transiting to Gen-2 speed

s/transiting/transitioning/

I think there are other uses of "transit" when you mean "transition".

> +static int tegra_pcie_dw_rd_own_conf(struct pcie_port *pp, int where, int size,
> +  u32 *val)
> +{
> + struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
> +
> + /*
> +  * This is an endpoint mode specific register happen to appear even
> +  * when controller is operating in root port mode and system hangs
> +  * when it is accessed with link being in ASPM-L1 state.
> +  * So skip accessing it altogether
> +  */
> + if (where == PORT_LOGIC_MSIX_DOORBELL) {
> + *val = 0x;
> + return PCIBIOS_SUCCESSFUL;
> + } else {
> + return dw_pcie_read(pci->dbi_base + where, size, val);
> + }
> +}
> +
> +static int tegra_pcie_dw_wr_own_conf(struct pcie_port *pp, int where, int size,
> +  u32 val)
> +{
> + struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
> +
> + /* This is EP specific register and system hangs when it is
> +  * accessed with link being in ASPM-L1 state.
> +  * So skip accessing it altogether
> +  */
> + if (where == PORT_LOGIC_MSIX_DOORBELL)
> + return PCIBIOS_SUCCESSFUL;
> + else
> + return dw_pcie_write(pci->dbi_base + where, size, val);

These two functions are almost identical and they could look more
similar.  This one has the wrong multi-line comment style, uses "EP"
instead of "endpoint", etc.  Use this style for the "if" since the
first case is really an error case:

  if (where == PORT_LOGIC_MSIX_DOORBELL) {
...
return ...;
  }

  return dw_pcie_...();

> +static int tegra_pcie_dw_host_init(struct pcie_port *pp)
> +{
> + struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
> + struct tegra_pcie_dw *pcie = dw_pcie_to_tegra_pcie(pci);
> + int count = 200;
> + u32 val, tmp, offset;
> + u16 val_w;
> +
> +#if defined(CONFIG_PCIEASPM)
> + pcie->cfg_link_cap_l1sub =
> + dw_pcie_find_ext_capability(pci, PCI_EXT_CAP_ID_L1SS) +
> + PCI_L1SS_CAP;
> +#endif
> + val = dw_pcie_readl_dbi(pci, PCI_IO_BASE);
> + val &= ~(IO_BASE_IO_DECODE | IO_BASE_IO_DECODE_BIT8);
> + dw_pcie_writel_dbi(pci, PCI_IO_BASE, val);
> +
> + val = dw_pcie_readl_dbi(pci, PCI_PREF_MEMORY_BASE);
> + val |= CFG_PREF_MEM_LIMIT_BASE_MEM_DECODE;
> + val |= 

Re: [PATCH 09/10] PCI: tegra: Add Tegra194 PCIe support

2019-03-27 Thread Jon Hunter


On 26/03/2019 15:13, Vidya Sagar wrote:
> Add support for Synopsys DesignWare core IP based PCIe host controller
> present in Tegra194 SoC.
> 
> Signed-off-by: Vidya Sagar 
> ---
>  drivers/pci/controller/dwc/Kconfig |   10 +
>  drivers/pci/controller/dwc/Makefile|1 +
>  drivers/pci/controller/dwc/pcie-tegra194.c | 1862 
>  3 files changed, 1873 insertions(+)
>  create mode 100644 drivers/pci/controller/dwc/pcie-tegra194.c
> 
> diff --git a/drivers/pci/controller/dwc/Kconfig b/drivers/pci/controller/dwc/Kconfig
> index 6ea74b1c0d94..d80f2d77892a 100644
> --- a/drivers/pci/controller/dwc/Kconfig
> +++ b/drivers/pci/controller/dwc/Kconfig
> @@ -213,4 +213,14 @@ config PCIE_UNIPHIER
> Say Y here if you want PCIe controller support on UniPhier SoCs.
> This driver supports LD20 and PXs3 SoCs.
>  
> +config PCIE_TEGRA194
> + bool "NVIDIA Tegra (T194) PCIe controller"
> + depends on TEGRA_BPMP && (ARCH_TEGRA || COMPILE_TEST)
> + depends on PCI_MSI_IRQ_DOMAIN
> + select PCIE_DW_HOST
> + select PHY_TEGRA194_PCIE_P2U
> + help
> +   Say Y here if you want support for DesignWare core based PCIe host
> +   controller found in NVIDIA Tegra T194 SoC.
> +

Don't we want tristate here? You have a removal function.

Cheers
Jon

-- 
nvpublic