Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
Okay, so checking what the difference is with _REV being 5 (meaning the firmware uses the legacy paths) doesn't help in any way. It's using a different method to turn the link off, and the other ACPI variables touched either point to undocumented registers on the PCI bridge or internal ACPI memory... so, anybody with any other ideas? I really wish the nvidia driver would enable runpm on pre-Turing GPUs, but that's sadly not the case, and on Turing things seem to be totally different, so it wouldn't help to check there as well... *sigh* On Tue, Dec 10, 2019 at 9:49 PM Karol Herbst wrote: > > On Tue, Dec 10, 2019 at 8:58 PM Dave Airlie wrote: > > > > On Mon, 9 Dec 2019 at 21:39, Rafael J. Wysocki wrote: > > > > > > On Mon, Dec 9, 2019 at 12:17 PM Karol Herbst wrote: > > > > > > > > anybody any other ideas? > > > > > > Not yet, but I'm trying to collect some more information. > > > > > > > It seems that both patches don't really fix > > > > the issue and I have no idea left on my side to try out. The only > > > > thing left I could do to further investigate would be to reverse > > > > engineer the Nvidia driver as they support runpm on Turing+ GPUs now, > > > > but I've heard users having similar issues to the one Lyude told us > > > > about... and I couldn't verify that the patches help there either in a > > > > reliable way. > > > > > > It looks like the newer (8+) versions of Windows expect the GPU driver > > > to prepare the GPU for power removal in some specific way and the > > > latter fails if the GPU has not been prepared as expected. > > > > > > Because testing indicates that the Windows 7 path in the platform > > > firmware works, it may be worth trying to do what it does to the PCIe > > > link before invoking the _OFF method for the power resource > > > controlling the GPU power. > > > > > > > Remember the pre Win8 path required calling a DSM method to actually > > power the card down, I think by the time we reach these methods in > > those cases the card is already gone. > > > > Dave. > > > > The point was that the firmware seems to do more in the legacy paths > and maybe we just have to do those things inside the driver instead > when using the new method. Also the _DSM call just wraps around the > interfaces on newer firmware anyway. The OS check is usually what > makes the difference. I might be wrong about the _DSM call just > wrapping though, but I think I saw it at least in some firmware at > some point. ___ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Tue, Dec 10, 2019 at 8:58 PM Dave Airlie wrote: > > On Mon, 9 Dec 2019 at 21:39, Rafael J. Wysocki wrote: > > > > On Mon, Dec 9, 2019 at 12:17 PM Karol Herbst wrote: > > > > > > anybody any other ideas? > > > > Not yet, but I'm trying to collect some more information. > > > > > It seems that both patches don't really fix > > > the issue and I have no idea left on my side to try out. The only > > > thing left I could do to further investigate would be to reverse > > > engineer the Nvidia driver as they support runpm on Turing+ GPUs now, > > > but I've heard users having similar issues to the one Lyude told us > > > about... and I couldn't verify that the patches help there either in a > > > reliable way. > > > > It looks like the newer (8+) versions of Windows expect the GPU driver > > to prepare the GPU for power removal in some specific way and the > > latter fails if the GPU has not been prepared as expected. > > > > Because testing indicates that the Windows 7 path in the platform > > firmware works, it may be worth trying to do what it does to the PCIe > > link before invoking the _OFF method for the power resource > > controlling the GPU power. > > > > Remember the pre Win8 path required calling a DSM method to actually > power the card down, I think by the time we reach these methods in > those cases the card is already gone. > > Dave. > The point was that the firmware seems to do more in the legacy paths and maybe we just have to do those things inside the driver instead when using the new method. Also the _DSM call just wraps around the interfaces on newer firmware anyway. The OS check is usually what makes the difference. I might be wrong about the _DSM call just wrapping though, but I think I saw it at least in some firmware at some point. ___ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Mon, 9 Dec 2019 at 21:39, Rafael J. Wysocki wrote: > > On Mon, Dec 9, 2019 at 12:17 PM Karol Herbst wrote: > > > > anybody any other ideas? > > Not yet, but I'm trying to collect some more information. > > > It seems that both patches don't really fix > > the issue and I have no idea left on my side to try out. The only > > thing left I could do to further investigate would be to reverse > > engineer the Nvidia driver as they support runpm on Turing+ GPUs now, > > but I've heard users having similar issues to the one Lyude told us > > about... and I couldn't verify that the patches help there either in a > > reliable way. > > It looks like the newer (8+) versions of Windows expect the GPU driver > to prepare the GPU for power removal in some specific way and the > latter fails if the GPU has not been prepared as expected. > > Because testing indicates that the Windows 7 path in the platform > firmware works, it may be worth trying to do what it does to the PCIe > link before invoking the _OFF method for the power resource > controlling the GPU power. > Remember the pre Win8 path required calling a DSM method to actually power the card down, I think by the time we reach these methods in those cases the card is already gone. Dave. ___ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
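For readers following along: the pre-Win8 "power the card down" step Dave mentions is a _DSM evaluation on the GPU's ACPI handle. A minimal sketch of such a call is below; the GUID and function index are placeholders only (the real values come from the vendor firmware and from nouveau's ACPI glue), so this illustrates the mechanism rather than the actual nouveau code.

#include <linux/acpi.h>

/* Placeholder GUID and function index -- NOT the real Optimus values. */
static const guid_t power_dsm_guid =
	GUID_INIT(0x00000000, 0x0000, 0x0000,
		  0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00);
#define POWER_DSM_FUNC_OFF	0x1	/* hypothetical function index */

static int legacy_dsm_power_down(acpi_handle gpu_handle)
{
	union acpi_object *obj;

	/* Evaluate _DSM(guid, rev = 1, func = POWER_DSM_FUNC_OFF, no args). */
	obj = acpi_evaluate_dsm(gpu_handle, &power_dsm_guid, 1,
				POWER_DSM_FUNC_OFF, NULL);
	if (!obj)
		return -EIO;

	ACPI_FREE(obj);
	return 0;
}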
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Mon, Dec 9, 2019 at 12:39 PM Rafael J. Wysocki wrote: > > On Mon, Dec 9, 2019 at 12:17 PM Karol Herbst wrote: > > > > anybody any other ideas? > > Not yet, but I'm trying to collect some more information. > > > It seems that both patches don't really fix > > the issue and I have no idea left on my side to try out. The only > > thing left I could do to further investigate would be to reverse > > engineer the Nvidia driver as they support runpm on Turing+ GPUs now, > > but I've heard users having similar issues to the one Lyude told us > > about... and I couldn't verify that the patches help there either in a > > reliable way. > > It looks like the newer (8+) versions of Windows expect the GPU driver > to prepare the GPU for power removal in some specific way and the > latter fails if the GPU has not been prepared as expected. > > Because testing indicates that the Windows 7 path in the platform > firmware works, it may be worth trying to do what it does to the PCIe > link before invoking the _OFF method for the power resource > controlling the GPU power. > ohh, that actually makes sense. Didn't think of that yet. > If the Mika's theory that the Win7 path simply turns the PCIe link off > is correct, then whatever the _OFF method tries to do to the link > after that should not matter. > By the way, and I was only thinking about it after sending my last email out, do you think we should fail the runtime resume path if the device gets stuck in a power state? Currently pci core always calls into the driver regardless, but maybe for D3cold it really makes sense to just bail and refuse to resume? I think I tried that as an early "fix" and might even have a patch around. This should at least prevent crashes inside drivers trying to access invalid memory or getting stuck in loops. > > On Wed, Nov 27, 2019 at 8:55 PM Lyude Paul wrote: > > > > > > On Wed, 2019-11-27 at 12:51 +0100, Karol Herbst wrote: > > > > On Wed, Nov 27, 2019 at 12:49 PM Mika Westerberg > > > > wrote: > > > > > On Tue, Nov 26, 2019 at 06:10:36PM -0500, Lyude Paul wrote: > > > > > > Hey-this is almost certainly not the right place in this thread to > > > > > > respond, > > > > > > but this thread has gotten so deep evolution can't push the subject > > > > > > further to > > > > > > the right, heh. So I'll just respond here. > > > > > > > > > > :) > > > > > > > > > > > I've been following this and helping out Karol with testing here and > > > > > > there. > > > > > > They had me test Bjorn's PCI branch on the X1 Extreme 2nd > > > > > > generation, > > > > > > which > > > > > > has a turing GPU and 8086:1901 PCI bridge. > > > > > > > > > > > > I was about to say "the patch fixed things, hooray!" but it seems > > > > > > that > > > > > > after > > > > > > trying runtime suspend/resume a couple times things fall apart > > > > > > again: > > > > > > > > > > You mean $subject patch, no? > > > > > > > > > > > > > no, I told Lyude to test the pci/pm branch as the runpm errors we saw > > > > on that machine looked different. Some BAR error the GPU reported > > > > after it got resumed, so I was wondering if the delays were helping > > > > with that. But after some cycles it still caused the same issue, that > > > > the GPU disappeared. Later testing also showed that my patch also > > > > didn't seem to help with this error sadly :/ > > > > > > > > > > [ 686.883247] nouveau :01:00.0: DRM: suspending object tree... 
> > > > > > [ 752.866484] ACPI Error: Aborting method \_SB.PCI0.PEG0.PEGP.NVPO > > > > > > due > > > > > > to previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529) > > > > > > [ 752.866508] ACPI Error: Aborting method \_SB.PCI0.PGON due to > > > > > > previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529) > > > > > > [ 752.866521] ACPI Error: Aborting method \_SB.PCI0.PEG0.PG00._ON > > > > > > due > > > > > > to previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529) > > > > > > > > > > This is probably the culprit. The same AML code fails to properly turn > > > > > on the device. > > > > > > > > > > Is acpidump from this system available somewhere? > > > > > > Attached it to this email > > > > > > > > > > > -- > > > Cheers, > > > Lyude Paul > > > ___ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
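The "bail and refuse to resume" idea Karol floats above could look roughly like the sketch below: verify the device actually came back before calling into the driver. This is only an illustration of the idea, not the actual drivers/pci/pci-driver.c code, and where exactly such a check would sit in the PCI core's runtime-resume path is left open.

#include <linux/pci.h>

static int runtime_resume_guarded(struct pci_dev *pdev,
				  int (*driver_runtime_resume)(struct device *))
{
	/*
	 * If the GPU never came back from D3cold, its vendor ID reads as
	 * all-ones; pci_device_is_present() checks exactly that.
	 */
	if (!pci_device_is_present(pdev)) {
		pci_err(pdev, "device did not return from D3cold, refusing to resume\n");
		return -ENODEV;
	}

	return driver_runtime_resume(&pdev->dev);
}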
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Mon, Dec 9, 2019 at 12:17 PM Karol Herbst wrote: > > anybody any other ideas? Not yet, but I'm trying to collect some more information. > It seems that both patches don't really fix > the issue and I have no idea left on my side to try out. The only > thing left I could do to further investigate would be to reverse > engineer the Nvidia driver as they support runpm on Turing+ GPUs now, > but I've heard users having similar issues to the one Lyude told us > about... and I couldn't verify that the patches help there either in a > reliable way. It looks like the newer (8+) versions of Windows expect the GPU driver to prepare the GPU for power removal in some specific way and the latter fails if the GPU has not been prepared as expected. Because testing indicates that the Windows 7 path in the platform firmware works, it may be worth trying to do what it does to the PCIe link before invoking the _OFF method for the power resource controlling the GPU power. If the Mika's theory that the Win7 path simply turns the PCIe link off is correct, then whatever the _OFF method tries to do to the link after that should not matter. > On Wed, Nov 27, 2019 at 8:55 PM Lyude Paul wrote: > > > > On Wed, 2019-11-27 at 12:51 +0100, Karol Herbst wrote: > > > On Wed, Nov 27, 2019 at 12:49 PM Mika Westerberg > > > wrote: > > > > On Tue, Nov 26, 2019 at 06:10:36PM -0500, Lyude Paul wrote: > > > > > Hey-this is almost certainly not the right place in this thread to > > > > > respond, > > > > > but this thread has gotten so deep evolution can't push the subject > > > > > further to > > > > > the right, heh. So I'll just respond here. > > > > > > > > :) > > > > > > > > > I've been following this and helping out Karol with testing here and > > > > > there. > > > > > They had me test Bjorn's PCI branch on the X1 Extreme 2nd generation, > > > > > which > > > > > has a turing GPU and 8086:1901 PCI bridge. > > > > > > > > > > I was about to say "the patch fixed things, hooray!" but it seems that > > > > > after > > > > > trying runtime suspend/resume a couple times things fall apart again: > > > > > > > > You mean $subject patch, no? > > > > > > > > > > no, I told Lyude to test the pci/pm branch as the runpm errors we saw > > > on that machine looked different. Some BAR error the GPU reported > > > after it got resumed, so I was wondering if the delays were helping > > > with that. But after some cycles it still caused the same issue, that > > > the GPU disappeared. Later testing also showed that my patch also > > > didn't seem to help with this error sadly :/ > > > > > > > > [ 686.883247] nouveau :01:00.0: DRM: suspending object tree... > > > > > [ 752.866484] ACPI Error: Aborting method \_SB.PCI0.PEG0.PEGP.NVPO > > > > > due > > > > > to previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529) > > > > > [ 752.866508] ACPI Error: Aborting method \_SB.PCI0.PGON due to > > > > > previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529) > > > > > [ 752.866521] ACPI Error: Aborting method \_SB.PCI0.PEG0.PG00._ON due > > > > > to previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529) > > > > > > > > This is probably the culprit. The same AML code fails to properly turn > > > > on the device. > > > > > > > > Is acpidump from this system available somewhere? > > > > Attached it to this email > > > > > > > > -- > > Cheers, > > Lyude Paul > ___ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
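A concrete way to try Rafael's suggestion, assuming (as Mika's reading of the ASL elsewhere in the thread suggests) that the Windows 7 path simply sets the Link Disable bit on the root port and waits for the link to drop, might look like the sketch below. It is an experiment under that assumption, not a proven fix, and the 100 x 16 ms poll loop is an arbitrary bound mirroring the LDLY-style delay loop in the firmware.

#include <linux/pci.h>
#include <linux/delay.h>

static void win7_style_link_disable(struct pci_dev *gpu)
{
	struct pci_dev *port = pci_upstream_bridge(gpu);
	u16 lnksta;
	int i;

	if (!port)
		return;

	/* Set Link Disable on the bridge above the GPU... */
	pcie_capability_set_word(port, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_LD);

	/* ...and wait for the data link layer to report link-down. */
	for (i = 0; i < 100; i++) {
		pcie_capability_read_word(port, PCI_EXP_LNKSTA, &lnksta);
		if (!(lnksta & PCI_EXP_LNKSTA_DLLLA))
			break;
		msleep(16);
	}
}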
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
anybody any other ideas? It seems that both patches don't really fix the issue and I have no idea left on my side to try out. The only thing left I could do to further investigate would be to reverse engineer the Nvidia driver as they support runpm on Turing+ GPUs now, but I've heard users having similar issues to the one Lyude told us about... and I couldn't verify that the patches help there either in a reliable way. On Wed, Nov 27, 2019 at 8:55 PM Lyude Paul wrote: > > On Wed, 2019-11-27 at 12:51 +0100, Karol Herbst wrote: > > On Wed, Nov 27, 2019 at 12:49 PM Mika Westerberg > > wrote: > > > On Tue, Nov 26, 2019 at 06:10:36PM -0500, Lyude Paul wrote: > > > > Hey-this is almost certainly not the right place in this thread to > > > > respond, > > > > but this thread has gotten so deep evolution can't push the subject > > > > further to > > > > the right, heh. So I'll just respond here. > > > > > > :) > > > > > > > I've been following this and helping out Karol with testing here and > > > > there. > > > > They had me test Bjorn's PCI branch on the X1 Extreme 2nd generation, > > > > which > > > > has a turing GPU and 8086:1901 PCI bridge. > > > > > > > > I was about to say "the patch fixed things, hooray!" but it seems that > > > > after > > > > trying runtime suspend/resume a couple times things fall apart again: > > > > > > You mean $subject patch, no? > > > > > > > no, I told Lyude to test the pci/pm branch as the runpm errors we saw > > on that machine looked different. Some BAR error the GPU reported > > after it got resumed, so I was wondering if the delays were helping > > with that. But after some cycles it still caused the same issue, that > > the GPU disappeared. Later testing also showed that my patch also > > didn't seem to help with this error sadly :/ > > > > > > [ 686.883247] nouveau :01:00.0: DRM: suspending object tree... > > > > [ 752.866484] ACPI Error: Aborting method \_SB.PCI0.PEG0.PEGP.NVPO due > > > > to previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529) > > > > [ 752.866508] ACPI Error: Aborting method \_SB.PCI0.PGON due to > > > > previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529) > > > > [ 752.866521] ACPI Error: Aborting method \_SB.PCI0.PEG0.PG00._ON due > > > > to previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529) > > > > > > This is probably the culprit. The same AML code fails to properly turn > > > on the device. > > > > > > Is acpidump from this system available somewhere? > > Attached it to this email > > > > > -- > Cheers, > Lyude Paul ___ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Wed, Nov 27, 2019 at 12:49 PM Mika Westerberg wrote: > > On Tue, Nov 26, 2019 at 06:10:36PM -0500, Lyude Paul wrote: > > Hey-this is almost certainly not the right place in this thread to respond, > > but this thread has gotten so deep evolution can't push the subject further > > to > > the right, heh. So I'll just respond here. > > :) > > > I've been following this and helping out Karol with testing here and there. > > They had me test Bjorn's PCI branch on the X1 Extreme 2nd generation, which > > has a turing GPU and 8086:1901 PCI bridge. > > > > I was about to say "the patch fixed things, hooray!" but it seems that after > > trying runtime suspend/resume a couple times things fall apart again: > > You mean $subject patch, no? > no, I told Lyude to test the pci/pm branch as the runpm errors we saw on that machine looked different. Some BAR error the GPU reported after it got resumed, so I was wondering if the delays were helping with that. But after some cycles it still caused the same issue, that the GPU disappeared. Later testing also showed that my patch also didn't seem to help with this error sadly :/ > > [ 686.883247] nouveau :01:00.0: DRM: suspending object tree... > > [ 752.866484] ACPI Error: Aborting method \_SB.PCI0.PEG0.PEGP.NVPO due to > > previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529) > > [ 752.866508] ACPI Error: Aborting method \_SB.PCI0.PGON due to previous > > error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529) > > [ 752.866521] ACPI Error: Aborting method \_SB.PCI0.PEG0.PG00._ON due to > > previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529) > > This is probably the culprit. The same AML code fails to properly turn > on the device. > > Is acpidump from this system available somewhere? > ___ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Tue, Nov 26, 2019 at 06:10:36PM -0500, Lyude Paul wrote: > Hey-this is almost certainly not the right place in this thread to respond, > but this thread has gotten so deep evolution can't push the subject further to > the right, heh. So I'll just respond here. :) > I've been following this and helping out Karol with testing here and there. > They had me test Bjorn's PCI branch on the X1 Extreme 2nd generation, which > has a turing GPU and 8086:1901 PCI bridge. > > I was about to say "the patch fixed things, hooray!" but it seems that after > trying runtime suspend/resume a couple times things fall apart again: You mean $subject patch, no? > [ 686.883247] nouveau :01:00.0: DRM: suspending object tree... > [ 752.866484] ACPI Error: Aborting method \_SB.PCI0.PEG0.PEGP.NVPO due to > previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529) > [ 752.866508] ACPI Error: Aborting method \_SB.PCI0.PGON due to previous > error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529) > [ 752.866521] ACPI Error: Aborting method \_SB.PCI0.PEG0.PG00._ON due to > previous error (AE_AML_LOOP_TIMEOUT) (20190816/psparse-529) This is probably the culprit. The same AML code fails to properly turn on the device. Is acpidump from this system available somewhere? ___ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
[big snip] > There is a sysfs attribute called "d3cold_allowed" which can be used > for "blocking" D3cold, so can you please retest using that? > Hey-this is almost certainly not the right place in this thread to respond, but this thread has gotten so deep evolution can't push the subject further to the right, heh. So I'll just respond here. I've been following this and helping out Karol with testing here and there. They had me test Bjorn's PCI branch on the X1 Extreme 2nd generation, which has a turing GPU and 8086:1901 PCI bridge. I was about to say "the patch fixed things, hooray!" but it seems that after trying runtime suspend/resume a couple times things fall apart again: [ 27.680433] nouveau :01:00.0: enabling device ( -> 0003) [ 27.680578] nouveau :01:00.0: NVIDIA TU117 (167000a1) [ 27.763967] nouveau :01:00.0: bios: version 90.17.20.00.16 [ 27.764664] nouveau :01:00.0: fb: 4096 MiB GDDR5 [ 27.806115] vga_switcheroo: enabled [ 27.806221] [TTM] Zone kernel: Available graphics memory: 16244510 KiB [ 27.806222] [TTM] Zone dma32: Available graphics memory: 2097152 KiB [ 27.806222] [TTM] Initializing pool allocator [ 27.806224] [TTM] Initializing DMA pool allocator [ 27.806249] nouveau :01:00.0: DRM: VRAM: 4096 MiB [ 27.806249] nouveau :01:00.0: DRM: GART: 536870912 MiB [ 27.806250] nouveau :01:00.0: DRM: BIT table 'A' not found [ 27.806251] nouveau :01:00.0: DRM: BIT table 'L' not found [ 27.806251] nouveau :01:00.0: DRM: TMDS table version 2.0 [ 27.806252] nouveau :01:00.0: DRM: DCB version 4.1 [ 27.806253] nouveau :01:00.0: DRM: DCB outp 00: 02800f66 04600020 [ 27.806253] nouveau :01:00.0: DRM: DCB outp 01: 02011f52 00020010 [ 27.806254] nouveau :01:00.0: DRM: DCB outp 02: 01022f36 04600010 [ 27.806254] nouveau :01:00.0: DRM: DCB outp 03: 01033f46 04600020 [ 27.806255] nouveau :01:00.0: DRM: DCB conn 00: 00020047 [ 27.806255] nouveau :01:00.0: DRM: DCB conn 01: 00010161 [ 27.806256] nouveau :01:00.0: DRM: DCB conn 02: 1248 [ 27.806256] nouveau :01:00.0: DRM: DCB conn 03: 2348 [ 27.806257] nouveau :01:00.0: DRM: Pointer to flat panel table invalid [ 27.806415] nouveau :01:00.0: DRM: failed to create kernel channel, -22 [ 27.806530] nouveau :01:00.0: DRM: MM: using COPY for buffer copies [ 28.114808] nouveau :01:00.0: DRM: unknown connector type 48 [ 28.114943] nouveau :01:00.0: DRM: unknown connector type 48 [ 28.115026] [drm] Supports vblank timestamp caching Rev 2 (21.10.2013). [ 28.115027] [drm] Driver supports precise vblank timestamp query. [ 28.116611] [drm] Cannot find any crtc or sizes [ 28.117452] [drm] Initialized nouveau 1.3.1 20120801 for :01:00.0 on minor 1 [ 28.118074] [drm] Cannot find any crtc or sizes [ 28.119523] [drm] Cannot find any crtc or sizes [ 34.081503] nouveau :01:00.0: DRM: suspending console... [ 34.081508] nouveau :01:00.0: DRM: suspending display... [ 34.081528] nouveau :01:00.0: DRM: evicting buffers... [ 34.081531] nouveau :01:00.0: DRM: waiting for kernel channels to go idle... [ 34.081551] nouveau :01:00.0: DRM: suspending fence... [ 34.091173] nouveau :01:00.0: DRM: suspending object tree... [ 37.729746] nouveau :01:00.0: DRM: resuming object tree... [ 38.042076] nouveau :01:00.0: DRM: resuming fence... [ 38.042167] nouveau :01:00.0: DRM: resuming display... [ 38.042175] nouveau :01:00.0: DRM: resuming console... [ 44.309325] nouveau :01:00.0: DRM: suspending console... [ 44.309331] nouveau :01:00.0: DRM: suspending display... [ 44.309349] nouveau :01:00.0: DRM: evicting buffers... 
[ 44.309352] nouveau :01:00.0: DRM: waiting for kernel channels to go idle... [ 44.309371] nouveau :01:00.0: DRM: suspending fence... [ 44.318938] nouveau :01:00.0: DRM: suspending object tree... [ 76.577644] nouveau :01:00.0: DRM: resuming object tree... [ 76.890266] nouveau :01:00.0: DRM: resuming fence... [ 76.890362] nouveau :01:00.0: DRM: resuming display... [ 76.890379] nouveau :01:00.0: DRM: resuming console... [ 82.721356] nouveau :01:00.0: DRM: suspending console... [ 82.721361] nouveau :01:00.0: DRM: suspending display... [ 82.721380] nouveau :01:00.0: DRM: evicting buffers... [ 82.721383] nouveau :01:00.0: DRM: waiting for kernel channels to go idle... [ 82.721403] nouveau :01:00.0: DRM: suspending fence... [ 82.730483] nouveau :01:00.0: DRM: suspending object tree... [ 681.369950] nouveau :01:00.0: DRM: resuming object tree... [ 681.690464] nouveau :01:00.0: DRM: resuming fence... [ 681.690555] nouveau :01:00.0: DRM: resuming display... [ 681.690568] nouveau :01:00.0: DRM: resuming console... [ 686.873629] nouveau :01:00.0: DRM: suspending console... [ 686.873634] nouveau :01:00.0: DRM: suspending display... [
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Fri, Nov 22, 2019 at 12:52 PM Mika Westerberg wrote: > [cut] [I'm really running out of time for today, unfortunately.] > > > > The current design is mostly based on the PCI PM Spec 1.2, so it would > > > > be consequent to follow system-wide suspend in PM-runtime and avoid > > > > putting PCIe ports holding devices in D0 into any low-power states. > > > > but that would make the approach in the $subject patch ineffective. > > > > > > > > Moreover, the fact that there are separate branches for "Windows 7" > > > > and "Windows 8+" kind of suggest a change in the expected behavior > > > > between Windows 7 and Windows 8, from the AML perspective. I would > > > > guess that Windows 7 followed PCI PM 1.2 and Windows 8 (and later) > > > > does something else. > > > > > > My understanding (which may not be correct) is that up to Windows 7 it > > > never put the devices into D3cold runtime. Only when the system entered > > > Sx states it evaluated the _OFF methods. > > > > I see. I think I have misunderstood what you said. I also think that Windows 7 and before didin't do RTD3, but it did PCI PM nevertheless and platform firmware could expect it to behave in a specific way in that respect. That expected behavior seems to have changed in the next generations of Windows, as reflected by the OS version and _REV checks in ASL. > > > Starting from Windows 8 it started doing this runtime so devices can > > > enter D3cold even when system is in S0. > > > > Hmm. So by setting _REV to 5 we effectively change the _OFF into a NOP? > > No, there are two paths in the _OFF() and them some common code such as > removing power etc. > > What the _REV 5 did is that it went into path where the AML seemed to > directly disable the link. > > The other path that is taken with Windows 8+ does not disable the link > but instead it puts it to low power L2 or L3 state (I suppose L3 since > it removes the power and the GPU probably does not support wake). OK, so the very existence of the two paths means that the OS behavior expected by the firmware in the two cases represented by them is different. Presumably, the expected hardware configuration in which the AML runs also is different in these two cases. > The ASL code is below. PGOF() gets called from the power resource > _OFF(): I'll look at it in detail when I have some more time later. 
> Method (PGOF, 1, Serialized)
> {
>     PIOF = Arg0
>     If ((PIOF == Zero))
>     {
>         If ((SGGP == Zero))
>         {
>             Return (Zero)
>         }
>     }
>     ElseIf ((PIOF == One))
>     {
>         If ((P1GP == Zero))
>         {
>             Return (Zero)
>         }
>     }
>     ElseIf ((PIOF == 0x02))
>     {
>         If ((P2GP == Zero))
>         {
>             Return (Zero)
>         }
>     }
>
>     PEBA = \XBAS /* External reference */
>     PDEV = GDEV (PIOF)
>     PFUN = GFUN (PIOF)
>     Name (SCLK, Package (0x03)
>     {
>         One,
>         0x80,
>         Zero
>     })
>     If ((CCHK (PIOF, Zero) == Zero))
>     {
>         Return (Zero)
>     }
>
>     \_SB.PCI0.PEG0.PEGP.LTRE = \_SB.PCI0.PEG0.LREN
>     If ((Arg0 == Zero))
>     {
>         ELC0 = LCT0 /* \_SB_.PCI0.LCT0 */
>         H0VI = S0VI /* \_SB_.PCI0.S0VI */
>         H0DI = S0DI /* \_SB_.PCI0.S0DI */
>         ECP0 = LCP0 /* \_SB_.PCI0.LCP0 */
>     }
>     ElseIf ((Arg0 == One))
>     {
>         ELC1 = LCT1 /* \_SB_.PCI0.LCT1 */
>         H1VI = S1VI /* \_SB_.PCI0.S1VI */
>         H1DI = S1DI /* \_SB_.PCI0.S1DI */
>         ECP1 = LCP1 /* \_SB_.PCI0.LCP1 */
>     }
>     ElseIf ((Arg0 == 0x02))
>     {
>         ELC2 = LCT2 /* \_SB_.PCI0.LCT2 */
>         H2VI = S2VI /* \_SB_.PCI0.S2VI */
>         H2DI = S2DI /* \_SB_.PCI0.S2DI */
>         ECP2 = LCP2 /* \_SB_.PCI0.LCP2 */
>     }
>
>     If (((OSYS <= 0x07D9) || ((OSYS == 0x07DF) && (_REV == 0x05))))
>     {
>         If ((PIOF == Zero))
>         {
>             P0LD = One
>             TCNT = Zero
>             While ((TCNT < LDLY))
>             {
>                 If ((P0LT == 0x08))
>                 {
>                     Break
>                 }
>
>                 Sleep (0x10)
>                 TCNT += 0x10
>             }
>
>             P0RM = One
>             P0AP = 0x03
>         }
>         ElseIf ((PIOF == One))
>         {
>             P1LD = One
>
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Fri, Nov 22, 2019 at 12:34 PM Karol Herbst wrote: > > On Fri, Nov 22, 2019 at 12:30 PM Rafael J. Wysocki wrote: > > [cut] > > > > the issue is not AML related at all as I am able to reproduce this > issue without having to invoke any of that at all, I just need to poke > into the PCI register directly to cut the power. Since the register is not documented, you don't actually know what exactly happens when it is written to. You basically are saying something like "if I write a specific value to an undocumented register, that makes things fail". And yes, writing things to undocumented registers is likely to cause failure to happen, in general. The point is that the kernel will never write into this register by itself. > The register is not documented, but effectively what the AML code is writing > to as well. So that AML code is problematic. It expects the write to do something useful, but that's not the case. Without the AML, the register would not have been written to at all. > Of course it might also be that the code I was testing it was doing > things in a non conformant way and I just hit a different issue as > well, but in the end I don't think that the AML code is the root cause > of all of that. If AML is not involved at all, things work. You've just said so in another message in this thread, quoting verbatim: "yes. In my previous testing I was poking into the PCI registers of the bridge controller and the GPU directly and that never caused any issues as long as I limited it to putting the devices into D3hot." You cannot claim a hardware bug just because a write to an undocumented register from AML causes things to break. First, that may be a bug in the AML (which is not unheard of). Second, and that is more likely, the expectations of the AML code may not be met at the time it is run. Assuming the latter, the root cause is really that the kernel executes the AML in a hardware configuration in which the expectations of that AML are not met. We are now trying to understand what those expectations may be and so how to cause them to be met. Your observation that the issue can be avoided if the GPU is not put into D3hot by a PMCSR write is a step in that direction and it is a good finding. The information from Mika based on the ASL analysis is helpful too. Let's not jump to premature conclusions too quickly, though. ___ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
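For reference, "putting the GPU into D3hot by a PMCSR write", which Rafael refers to above, boils down to a config-space write like the sketch below; in normal operation the kernel does this through pci_set_power_state() rather than open-coding it.

#include <linux/pci.h>

static void set_d3hot_via_pmcsr(struct pci_dev *pdev)
{
	u16 pmcsr;

	if (!pdev->pm_cap)
		return;

	/* Read-modify-write the PowerState field of the PM control/status register. */
	pci_read_config_word(pdev, pdev->pm_cap + PCI_PM_CTRL, &pmcsr);
	pmcsr &= ~PCI_PM_CTRL_STATE_MASK;
	pmcsr |= PCI_D3hot;
	pci_write_config_word(pdev, pdev->pm_cap + PCI_PM_CTRL, pmcsr);
}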
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Fri, Nov 22, 2019 at 12:30:20PM +0100, Rafael J. Wysocki wrote: > On Fri, Nov 22, 2019 at 11:36 AM Mika Westerberg > wrote: > > > > On Thu, Nov 21, 2019 at 11:39:23PM +0100, Rafael J. Wysocki wrote: > > > On Thu, Nov 21, 2019 at 8:49 PM Mika Westerberg > > > wrote: > > > > > > > > On Thu, Nov 21, 2019 at 04:43:24PM +0100, Rafael J. Wysocki wrote: > > > > > On Thu, Nov 21, 2019 at 1:52 PM Mika Westerberg > > > > > wrote: > > > > > > > > > > > > On Thu, Nov 21, 2019 at 01:46:14PM +0200, Mika Westerberg wrote: > > > > > > > On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote: > > > > > > > > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg > > > > > > > > wrote: > > > > > > > > > > > > > > > > > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki > > > > > > > > > wrote: > > > > > > > > > > > last week or so I found systems where the GPU was under > > > > > > > > > > > the "PCI > > > > > > > > > > > Express Root Port" (name from lspci) and on those systems > > > > > > > > > > > all of that > > > > > > > > > > > seems to work. So I am wondering if it's indeed just the > > > > > > > > > > > 0x1901 one, > > > > > > > > > > > which also explains Mikas case that Thunderbolt stuff > > > > > > > > > > > works as devices > > > > > > > > > > > never get populated under this particular bridge > > > > > > > > > > > controller, but under > > > > > > > > > > > those "Root Port"s > > > > > > > > > > > > > > > > > > > > It always is a PCIe port, but its location within the SoC > > > > > > > > > > may matter. > > > > > > > > > > > > > > > > > > Exactly. Intel hardware has PCIe ports on CPU side (these are > > > > > > > > > called > > > > > > > > > PEG, PCI Express Graphics, ports), and the PCH side. I think > > > > > > > > > the IP is > > > > > > > > > still the same. > > > > > > > > > > > > > > > > > > > Also some custom AML-based power management is involved and > > > > > > > > > > that may > > > > > > > > > > be making specific assumptions on the configuration of the > > > > > > > > > > SoC and the > > > > > > > > > > GPU at the time of its invocation which unfortunately are > > > > > > > > > > not known to > > > > > > > > > > us. > > > > > > > > > > > > > > > > > > > > However, it looks like the AML invoked to power down the > > > > > > > > > > GPU from > > > > > > > > > > acpi_pci_set_power_state() gets confused if it is not in > > > > > > > > > > PCI D0 at > > > > > > > > > > that point, so it looks like that AML tries to access > > > > > > > > > > device memory on > > > > > > > > > > the GPU (beyond the PCI config space) or similar which is > > > > > > > > > > not > > > > > > > > > > accessible in PCI power states below D0. > > > > > > > > > > > > > > > > > > Or the PCI config space of the GPU when the parent root port > > > > > > > > > is in D3hot > > > > > > > > > (as it is the case here). Also then the GPU config space is > > > > > > > > > not > > > > > > > > > accessible. > > > > > > > > > > > > > > > > Why would the parent port be in D3hot at that point? Wouldn't > > > > > > > > that be > > > > > > > > a suspend ordering violation? > > > > > > > > > > > > > > No. We put the GPU into D3hot first, > > > > > > > > > > OK > > > > > > > > > > Does this involve any AML, like a _PS3 under the GPU object? > > > > > > > > I don't see _PS3 (nor _PS0) for that object. If I read it right the GPU > > > > itself is not described in ACPI tables at all. 
> > > > > > OK > > > > > > > > > > then the root port and then turn > > > > > > > off the power resource (which is attached to the root port) > > > > > > > resulting > > > > > > > the topology entering D3cold. > > > > > > > > > > > > I don't see that happening in the AML though. > > > > > > > > > > Which AML do you mean, specifically? The _OFF method for the root > > > > > port's _PR3 power resource or something else? > > > > > > > > The root port's _OFF method for the power resource returned by its _PR3. > > > > > > OK, so without the $subject patch we (1) program the downstream > > > component (GPU) into D3hot, then we (2) program the port holding it > > > into D3hot and then we (3) let the AML (_OFF for the power resource > > > listed by _PR3 under the port object) run. > > > > > > Something strange happens at this point (and I guess that _OFF doesn't > > > even reach the point where it removes power from the port which is why > > > we see a lock-up). > > > > It does not necessary lead to lock-up. Here is dmesg from Karol's > > system: > > > > > > https://gist.githubusercontent.com/karolherbst/40eb091c7b7b33ef993525de660f1a3b/raw/2380e31f566e93e5ba7c87ef545420965d4c492c/gistfile1.txt > > > > what seems to happen is that the GPU never "comes back" from D3cold so > > the driver starts breaking apart as the hardware is gone now. > > I meant a lock-up in hardware to be precise, that causes it to stop to > respond (which causes it to appear to be permanently in D3cold). > > I wonder if the port accepts PCI config space write
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Fri, Nov 22, 2019 at 12:30 PM Rafael J. Wysocki wrote: > > On Fri, Nov 22, 2019 at 11:36 AM Mika Westerberg > wrote: > > > > On Thu, Nov 21, 2019 at 11:39:23PM +0100, Rafael J. Wysocki wrote: > > > On Thu, Nov 21, 2019 at 8:49 PM Mika Westerberg > > > wrote: > > > > > > > > On Thu, Nov 21, 2019 at 04:43:24PM +0100, Rafael J. Wysocki wrote: > > > > > On Thu, Nov 21, 2019 at 1:52 PM Mika Westerberg > > > > > wrote: > > > > > > > > > > > > On Thu, Nov 21, 2019 at 01:46:14PM +0200, Mika Westerberg wrote: > > > > > > > On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote: > > > > > > > > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg > > > > > > > > wrote: > > > > > > > > > > > > > > > > > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki > > > > > > > > > wrote: > > > > > > > > > > > last week or so I found systems where the GPU was under > > > > > > > > > > > the "PCI > > > > > > > > > > > Express Root Port" (name from lspci) and on those systems > > > > > > > > > > > all of that > > > > > > > > > > > seems to work. So I am wondering if it's indeed just the > > > > > > > > > > > 0x1901 one, > > > > > > > > > > > which also explains Mikas case that Thunderbolt stuff > > > > > > > > > > > works as devices > > > > > > > > > > > never get populated under this particular bridge > > > > > > > > > > > controller, but under > > > > > > > > > > > those "Root Port"s > > > > > > > > > > > > > > > > > > > > It always is a PCIe port, but its location within the SoC > > > > > > > > > > may matter. > > > > > > > > > > > > > > > > > > Exactly. Intel hardware has PCIe ports on CPU side (these are > > > > > > > > > called > > > > > > > > > PEG, PCI Express Graphics, ports), and the PCH side. I think > > > > > > > > > the IP is > > > > > > > > > still the same. > > > > > > > > > > > > > > > > > > > Also some custom AML-based power management is involved and > > > > > > > > > > that may > > > > > > > > > > be making specific assumptions on the configuration of the > > > > > > > > > > SoC and the > > > > > > > > > > GPU at the time of its invocation which unfortunately are > > > > > > > > > > not known to > > > > > > > > > > us. > > > > > > > > > > > > > > > > > > > > However, it looks like the AML invoked to power down the > > > > > > > > > > GPU from > > > > > > > > > > acpi_pci_set_power_state() gets confused if it is not in > > > > > > > > > > PCI D0 at > > > > > > > > > > that point, so it looks like that AML tries to access > > > > > > > > > > device memory on > > > > > > > > > > the GPU (beyond the PCI config space) or similar which is > > > > > > > > > > not > > > > > > > > > > accessible in PCI power states below D0. > > > > > > > > > > > > > > > > > > Or the PCI config space of the GPU when the parent root port > > > > > > > > > is in D3hot > > > > > > > > > (as it is the case here). Also then the GPU config space is > > > > > > > > > not > > > > > > > > > accessible. > > > > > > > > > > > > > > > > Why would the parent port be in D3hot at that point? Wouldn't > > > > > > > > that be > > > > > > > > a suspend ordering violation? > > > > > > > > > > > > > > No. We put the GPU into D3hot first, > > > > > > > > > > OK > > > > > > > > > > Does this involve any AML, like a _PS3 under the GPU object? > > > > > > > > I don't see _PS3 (nor _PS0) for that object. If I read it right the GPU > > > > itself is not described in ACPI tables at all. 
> > > > > > OK > > > > > > > > > > then the root port and then turn > > > > > > > off the power resource (which is attached to the root port) > > > > > > > resulting > > > > > > > the topology entering D3cold. > > > > > > > > > > > > I don't see that happening in the AML though. > > > > > > > > > > Which AML do you mean, specifically? The _OFF method for the root > > > > > port's _PR3 power resource or something else? > > > > > > > > The root port's _OFF method for the power resource returned by its _PR3. > > > > > > OK, so without the $subject patch we (1) program the downstream > > > component (GPU) into D3hot, then we (2) program the port holding it > > > into D3hot and then we (3) let the AML (_OFF for the power resource > > > listed by _PR3 under the port object) run. > > > > > > Something strange happens at this point (and I guess that _OFF doesn't > > > even reach the point where it removes power from the port which is why > > > we see a lock-up). > > > > It does not necessary lead to lock-up. Here is dmesg from Karol's > > system: > > > > > > https://gist.githubusercontent.com/karolherbst/40eb091c7b7b33ef993525de660f1a3b/raw/2380e31f566e93e5ba7c87ef545420965d4c492c/gistfile1.txt > > > > what seems to happen is that the GPU never "comes back" from D3cold so > > the driver starts breaking apart as the hardware is gone now. > > I meant a lock-up in hardware to be precise, that causes it to stop to > respond (which causes it to appear to be permanently in D3cold). > > I wonder if the port accepts PCI config space writes then.
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Fri, Nov 22, 2019 at 10:07 AM Rafael J. Wysocki wrote: > > On Fri, Nov 22, 2019 at 1:13 AM Karol Herbst wrote: > > > > so while trying to test with d3cold disabled, I noticed that I run > > into the exact same error. > > Does this mean that you disabled d3cold on the GPU via sysfs (the > "d3cold_allowed" attribute was 0) and the original problem still > occurred in that configuration? > yes. In my previous testing I was poking into the PCI registers of the bridge controller and the GPU directly and that never caused any issues as long as I limited it to putting the devices into D3hot. > > And I verified that the > > \_SB.PCI0.PEG0.PG00._STA returns 1, which indicates it should still be > > turned on. > > I don't really understand this comment, so can you explain it a bit to > me, please? > that's the ACPI method to fetch the "status" of the power resource, it should return 0 when the device was powered off (the GPU) and 1 otherwise. ___ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
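The check Karol describes (the power resource's _STA returning 1, i.e. still on) can be done from kernel code roughly as follows; the \_SB.PCI0.PEG0.PG00 path is the one from this particular machine's tables, taken from the thread.

#include <linux/acpi.h>

static int pg00_power_resource_is_on(void)
{
	acpi_handle handle;
	unsigned long long sta;
	acpi_status status;

	status = acpi_get_handle(NULL, "\\_SB.PCI0.PEG0.PG00", &handle);
	if (ACPI_FAILURE(status))
		return -ENODEV;

	/* _STA on a power resource: 0 = off, 1 = on. */
	status = acpi_evaluate_integer(handle, "_STA", NULL, &sta);
	if (ACPI_FAILURE(status))
		return -EIO;

	return sta ? 1 : 0;
}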
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Fri, Nov 22, 2019 at 11:36 AM Mika Westerberg wrote: > > On Thu, Nov 21, 2019 at 11:39:23PM +0100, Rafael J. Wysocki wrote: > > On Thu, Nov 21, 2019 at 8:49 PM Mika Westerberg > > wrote: > > > > > > On Thu, Nov 21, 2019 at 04:43:24PM +0100, Rafael J. Wysocki wrote: > > > > On Thu, Nov 21, 2019 at 1:52 PM Mika Westerberg > > > > wrote: > > > > > > > > > > On Thu, Nov 21, 2019 at 01:46:14PM +0200, Mika Westerberg wrote: > > > > > > On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote: > > > > > > > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg > > > > > > > wrote: > > > > > > > > > > > > > > > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki > > > > > > > > wrote: > > > > > > > > > > last week or so I found systems where the GPU was under the > > > > > > > > > > "PCI > > > > > > > > > > Express Root Port" (name from lspci) and on those systems > > > > > > > > > > all of that > > > > > > > > > > seems to work. So I am wondering if it's indeed just the > > > > > > > > > > 0x1901 one, > > > > > > > > > > which also explains Mikas case that Thunderbolt stuff works > > > > > > > > > > as devices > > > > > > > > > > never get populated under this particular bridge > > > > > > > > > > controller, but under > > > > > > > > > > those "Root Port"s > > > > > > > > > > > > > > > > > > It always is a PCIe port, but its location within the SoC may > > > > > > > > > matter. > > > > > > > > > > > > > > > > Exactly. Intel hardware has PCIe ports on CPU side (these are > > > > > > > > called > > > > > > > > PEG, PCI Express Graphics, ports), and the PCH side. I think > > > > > > > > the IP is > > > > > > > > still the same. > > > > > > > > > > > > > > > > > Also some custom AML-based power management is involved and > > > > > > > > > that may > > > > > > > > > be making specific assumptions on the configuration of the > > > > > > > > > SoC and the > > > > > > > > > GPU at the time of its invocation which unfortunately are not > > > > > > > > > known to > > > > > > > > > us. > > > > > > > > > > > > > > > > > > However, it looks like the AML invoked to power down the GPU > > > > > > > > > from > > > > > > > > > acpi_pci_set_power_state() gets confused if it is not in PCI > > > > > > > > > D0 at > > > > > > > > > that point, so it looks like that AML tries to access device > > > > > > > > > memory on > > > > > > > > > the GPU (beyond the PCI config space) or similar which is not > > > > > > > > > accessible in PCI power states below D0. > > > > > > > > > > > > > > > > Or the PCI config space of the GPU when the parent root port is > > > > > > > > in D3hot > > > > > > > > (as it is the case here). Also then the GPU config space is not > > > > > > > > accessible. > > > > > > > > > > > > > > Why would the parent port be in D3hot at that point? Wouldn't > > > > > > > that be > > > > > > > a suspend ordering violation? > > > > > > > > > > > > No. We put the GPU into D3hot first, > > > > > > > > OK > > > > > > > > Does this involve any AML, like a _PS3 under the GPU object? > > > > > > I don't see _PS3 (nor _PS0) for that object. If I read it right the GPU > > > itself is not described in ACPI tables at all. > > > > OK > > > > > > > > then the root port and then turn > > > > > > off the power resource (which is attached to the root port) > > > > > > resulting > > > > > > the topology entering D3cold. > > > > > > > > > > I don't see that happening in the AML though. > > > > > > > > Which AML do you mean, specifically? 
The _OFF method for the root > > > > port's _PR3 power resource or something else? > > > > > > The root port's _OFF method for the power resource returned by its _PR3. > > > > OK, so without the $subject patch we (1) program the downstream > > component (GPU) into D3hot, then we (2) program the port holding it > > into D3hot and then we (3) let the AML (_OFF for the power resource > > listed by _PR3 under the port object) run. > > > > Something strange happens at this point (and I guess that _OFF doesn't > > even reach the point where it removes power from the port which is why > > we see a lock-up). > > It does not necessary lead to lock-up. Here is dmesg from Karol's > system: > > > https://gist.githubusercontent.com/karolherbst/40eb091c7b7b33ef993525de660f1a3b/raw/2380e31f566e93e5ba7c87ef545420965d4c492c/gistfile1.txt > > what seems to happen is that the GPU never "comes back" from D3cold so > the driver starts breaking apart as the hardware is gone now. I meant a lock-up in hardware to be precise, that causes it to stop to respond (which causes it to appear to be permanently in D3cold). I wonder if the port accepts PCI config space writes then. > > We know that skipping (1) makes things work and we kind of suspect > > that skipping (3) would make things work either, but what about doing > > (1) and (3) without (2)? > > You mean in this particular case or in general? In this case in particular to start with. Just do an experiment to see what happens if we
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Thu, Nov 21, 2019 at 11:39:23PM +0100, Rafael J. Wysocki wrote: > On Thu, Nov 21, 2019 at 8:49 PM Mika Westerberg > wrote: > > > > On Thu, Nov 21, 2019 at 04:43:24PM +0100, Rafael J. Wysocki wrote: > > > On Thu, Nov 21, 2019 at 1:52 PM Mika Westerberg > > > wrote: > > > > > > > > On Thu, Nov 21, 2019 at 01:46:14PM +0200, Mika Westerberg wrote: > > > > > On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote: > > > > > > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg > > > > > > wrote: > > > > > > > > > > > > > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote: > > > > > > > > > last week or so I found systems where the GPU was under the > > > > > > > > > "PCI > > > > > > > > > Express Root Port" (name from lspci) and on those systems all > > > > > > > > > of that > > > > > > > > > seems to work. So I am wondering if it's indeed just the > > > > > > > > > 0x1901 one, > > > > > > > > > which also explains Mikas case that Thunderbolt stuff works > > > > > > > > > as devices > > > > > > > > > never get populated under this particular bridge controller, > > > > > > > > > but under > > > > > > > > > those "Root Port"s > > > > > > > > > > > > > > > > It always is a PCIe port, but its location within the SoC may > > > > > > > > matter. > > > > > > > > > > > > > > Exactly. Intel hardware has PCIe ports on CPU side (these are > > > > > > > called > > > > > > > PEG, PCI Express Graphics, ports), and the PCH side. I think the > > > > > > > IP is > > > > > > > still the same. > > > > > > > > > > > > > > > Also some custom AML-based power management is involved and > > > > > > > > that may > > > > > > > > be making specific assumptions on the configuration of the SoC > > > > > > > > and the > > > > > > > > GPU at the time of its invocation which unfortunately are not > > > > > > > > known to > > > > > > > > us. > > > > > > > > > > > > > > > > However, it looks like the AML invoked to power down the GPU > > > > > > > > from > > > > > > > > acpi_pci_set_power_state() gets confused if it is not in PCI D0 > > > > > > > > at > > > > > > > > that point, so it looks like that AML tries to access device > > > > > > > > memory on > > > > > > > > the GPU (beyond the PCI config space) or similar which is not > > > > > > > > accessible in PCI power states below D0. > > > > > > > > > > > > > > Or the PCI config space of the GPU when the parent root port is > > > > > > > in D3hot > > > > > > > (as it is the case here). Also then the GPU config space is not > > > > > > > accessible. > > > > > > > > > > > > Why would the parent port be in D3hot at that point? Wouldn't that > > > > > > be > > > > > > a suspend ordering violation? > > > > > > > > > > No. We put the GPU into D3hot first, > > > > > > OK > > > > > > Does this involve any AML, like a _PS3 under the GPU object? > > > > I don't see _PS3 (nor _PS0) for that object. If I read it right the GPU > > itself is not described in ACPI tables at all. > > OK > > > > > > then the root port and then turn > > > > > off the power resource (which is attached to the root port) resulting > > > > > the topology entering D3cold. > > > > > > > > I don't see that happening in the AML though. > > > > > > Which AML do you mean, specifically? The _OFF method for the root > > > port's _PR3 power resource or something else? > > > > The root port's _OFF method for the power resource returned by its _PR3. 
> > OK, so without the $subject patch we (1) program the downstream > component (GPU) into D3hot, then we (2) program the port holding it > into D3hot and then we (3) let the AML (_OFF for the power resource > listed by _PR3 under the port object) run. > > Something strange happens at this point (and I guess that _OFF doesn't > even reach the point where it removes power from the port which is why > we see a lock-up). It does not necessary lead to lock-up. Here is dmesg from Karol's system: https://gist.githubusercontent.com/karolherbst/40eb091c7b7b33ef993525de660f1a3b/raw/2380e31f566e93e5ba7c87ef545420965d4c492c/gistfile1.txt what seems to happen is that the GPU never "comes back" from D3cold so the driver starts breaking apart as the hardware is gone now. > We know that skipping (1) makes things work and we kind of suspect > that skipping (3) would make things work either, but what about doing > (1) and (3) without (2)? You mean in this particular case or in general? Because if the port has _PSx methods we need to put it into D3hot AFAIK. > > > > Basically the difference is that when Windows 7 or Linux (the _REV==5 > > > > check) then we directly do link disable whereas in Windows 8+ we invoke > > > > LKDS() method that puts the link into L2/L3. None of the fields they > > > > access seem to touch the GPU itself. > > > > > > So that may be where the problem is. > > > > > > Putting the downstream component into PCI D[1-3] is expected to put > > > the link into L1, so I'm not sure how that plays with the later > > > attempt to put it int
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Fri, Nov 22, 2019 at 1:13 AM Karol Herbst wrote: > > so while trying to test with d3cold disabled, I noticed that I run > into the exact same error. Does this mean that you disabled d3cold on the GPU via sysfs (the "d3cold_allowed" attribute was 0) and the original problem still occurred in that configuration? > And I verified that the > \_SB.PCI0.PEG0.PG00._STA returns 1, which indicates it should still be > turned on. I don't really understand this comment, so can you explain it a bit to me, please? ___ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
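For anyone reproducing this: the sysfs attribute Rafael asks about is /sys/bus/pci/devices/<bdf>/d3cold_allowed, and writing 0 to it ends up calling pci_d3cold_disable() on the device, so a driver-side experiment can do the same thing, as sketched below.

#include <linux/pci.h>

static void block_d3cold(struct pci_dev *gpu)
{
	/* In-kernel equivalent of "echo 0 > .../d3cold_allowed". */
	pci_d3cold_disable(gpu);
}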
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
so while trying to test with d3cold disabled, I noticed that I run into the exact same error. And I verified that the \_SB.PCI0.PEG0.PG00._STA returns 1, which indicates it should still be turned on. On Thu, Nov 21, 2019 at 11:50 PM Karol Herbst wrote: > > On Thu, Nov 21, 2019 at 11:39 PM Rafael J. Wysocki wrote: > > > > On Thu, Nov 21, 2019 at 8:49 PM Mika Westerberg > > wrote: > > > > > > On Thu, Nov 21, 2019 at 04:43:24PM +0100, Rafael J. Wysocki wrote: > > > > On Thu, Nov 21, 2019 at 1:52 PM Mika Westerberg > > > > wrote: > > > > > > > > > > On Thu, Nov 21, 2019 at 01:46:14PM +0200, Mika Westerberg wrote: > > > > > > On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote: > > > > > > > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg > > > > > > > wrote: > > > > > > > > > > > > > > > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki > > > > > > > > wrote: > > > > > > > > > > last week or so I found systems where the GPU was under the > > > > > > > > > > "PCI > > > > > > > > > > Express Root Port" (name from lspci) and on those systems > > > > > > > > > > all of that > > > > > > > > > > seems to work. So I am wondering if it's indeed just the > > > > > > > > > > 0x1901 one, > > > > > > > > > > which also explains Mikas case that Thunderbolt stuff works > > > > > > > > > > as devices > > > > > > > > > > never get populated under this particular bridge > > > > > > > > > > controller, but under > > > > > > > > > > those "Root Port"s > > > > > > > > > > > > > > > > > > It always is a PCIe port, but its location within the SoC may > > > > > > > > > matter. > > > > > > > > > > > > > > > > Exactly. Intel hardware has PCIe ports on CPU side (these are > > > > > > > > called > > > > > > > > PEG, PCI Express Graphics, ports), and the PCH side. I think > > > > > > > > the IP is > > > > > > > > still the same. > > > > > > > > > > > > > > > > > Also some custom AML-based power management is involved and > > > > > > > > > that may > > > > > > > > > be making specific assumptions on the configuration of the > > > > > > > > > SoC and the > > > > > > > > > GPU at the time of its invocation which unfortunately are not > > > > > > > > > known to > > > > > > > > > us. > > > > > > > > > > > > > > > > > > However, it looks like the AML invoked to power down the GPU > > > > > > > > > from > > > > > > > > > acpi_pci_set_power_state() gets confused if it is not in PCI > > > > > > > > > D0 at > > > > > > > > > that point, so it looks like that AML tries to access device > > > > > > > > > memory on > > > > > > > > > the GPU (beyond the PCI config space) or similar which is not > > > > > > > > > accessible in PCI power states below D0. > > > > > > > > > > > > > > > > Or the PCI config space of the GPU when the parent root port is > > > > > > > > in D3hot > > > > > > > > (as it is the case here). Also then the GPU config space is not > > > > > > > > accessible. > > > > > > > > > > > > > > Why would the parent port be in D3hot at that point? Wouldn't > > > > > > > that be > > > > > > > a suspend ordering violation? > > > > > > > > > > > > No. We put the GPU into D3hot first, > > > > > > > > OK > > > > > > > > Does this involve any AML, like a _PS3 under the GPU object? > > > > > > I don't see _PS3 (nor _PS0) for that object. If I read it right the GPU > > > itself is not described in ACPI tables at all. 
> > > > OK > > > > > > > > then the root port and then turn > > > > > > off the power resource (which is attached to the root port) > > > > > > resulting > > > > > > the topology entering D3cold. > > > > > > > > > > I don't see that happening in the AML though. > > > > > > > > Which AML do you mean, specifically? The _OFF method for the root > > > > port's _PR3 power resource or something else? > > > > > > The root port's _OFF method for the power resource returned by its _PR3. > > > > OK, so without the $subject patch we (1) program the downstream > > component (GPU) into D3hot, then we (2) program the port holding it > > into D3hot and then we (3) let the AML (_OFF for the power resource > > listed by _PR3 under the port object) run. > > > > Something strange happens at this point (and I guess that _OFF doesn't > > even reach the point where it removes power from the port which is why > > we see a lock-up). > > > > it does though. I will post the data shortly (with the change in power > consumption), as I also want to do the ACPI traces now. > > > We know that skipping (1) makes things work and we kind of suspect > > that skipping (3) would make things work either, but what about doing > > (1) and (3) without (2)? > > > > > > > Basically the difference is that when Windows 7 or Linux (the _REV==5 > > > > > check) then we directly do link disable whereas in Windows 8+ we > > > > > invoke > > > > > LKDS() method that puts the link into L2/L3. None of the fields they > > > > > access seem to touch the GPU itself. > > > > > > > > So that may be where the problem is. > > > > > > > > Putt
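For the ACPI traces mentioned above, one option (assuming a kernel built with ACPICA method tracing support) is to arm the tracer for the power resource _OFF method through the parameters under /sys/module/acpi/parameters; the method path below is taken from the \_SB.PCI0.PEG0.PG00 name used earlier in the thread and may differ on other machines. A rough sketch:

    #!/usr/bin/env python3
    # Sketch: arm ACPICA method tracing for the next invocation of the power
    # resource _OFF method. Needs a kernel with method tracing compiled in;
    # the trace output lands in the kernel log / ftrace buffer. Run as root.
    PARAMS = "/sys/module/acpi/parameters/"

    def write(name, value):
        with open(PARAMS + name, "w") as f:
            f.write(value)

    write("trace_method_name", r"\_SB.PCI0.PEG0.PG00._OFF")  # assumed path
    write("trace_debug_layer", "0xffffffff")
    write("trace_debug_level", "0xffffffff")
    write("trace_state", "method-once")   # trace only the next call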
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Thu, Nov 21, 2019 at 11:39 PM Rafael J. Wysocki wrote: > > On Thu, Nov 21, 2019 at 8:49 PM Mika Westerberg > wrote: > > > > On Thu, Nov 21, 2019 at 04:43:24PM +0100, Rafael J. Wysocki wrote: > > > On Thu, Nov 21, 2019 at 1:52 PM Mika Westerberg > > > wrote: > > > > > > > > On Thu, Nov 21, 2019 at 01:46:14PM +0200, Mika Westerberg wrote: > > > > > On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote: > > > > > > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg > > > > > > wrote: > > > > > > > > > > > > > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote: > > > > > > > > > last week or so I found systems where the GPU was under the > > > > > > > > > "PCI > > > > > > > > > Express Root Port" (name from lspci) and on those systems all > > > > > > > > > of that > > > > > > > > > seems to work. So I am wondering if it's indeed just the > > > > > > > > > 0x1901 one, > > > > > > > > > which also explains Mikas case that Thunderbolt stuff works > > > > > > > > > as devices > > > > > > > > > never get populated under this particular bridge controller, > > > > > > > > > but under > > > > > > > > > those "Root Port"s > > > > > > > > > > > > > > > > It always is a PCIe port, but its location within the SoC may > > > > > > > > matter. > > > > > > > > > > > > > > Exactly. Intel hardware has PCIe ports on CPU side (these are > > > > > > > called > > > > > > > PEG, PCI Express Graphics, ports), and the PCH side. I think the > > > > > > > IP is > > > > > > > still the same. > > > > > > > > > > > > > > > Also some custom AML-based power management is involved and > > > > > > > > that may > > > > > > > > be making specific assumptions on the configuration of the SoC > > > > > > > > and the > > > > > > > > GPU at the time of its invocation which unfortunately are not > > > > > > > > known to > > > > > > > > us. > > > > > > > > > > > > > > > > However, it looks like the AML invoked to power down the GPU > > > > > > > > from > > > > > > > > acpi_pci_set_power_state() gets confused if it is not in PCI D0 > > > > > > > > at > > > > > > > > that point, so it looks like that AML tries to access device > > > > > > > > memory on > > > > > > > > the GPU (beyond the PCI config space) or similar which is not > > > > > > > > accessible in PCI power states below D0. > > > > > > > > > > > > > > Or the PCI config space of the GPU when the parent root port is > > > > > > > in D3hot > > > > > > > (as it is the case here). Also then the GPU config space is not > > > > > > > accessible. > > > > > > > > > > > > Why would the parent port be in D3hot at that point? Wouldn't that > > > > > > be > > > > > > a suspend ordering violation? > > > > > > > > > > No. We put the GPU into D3hot first, > > > > > > OK > > > > > > Does this involve any AML, like a _PS3 under the GPU object? > > > > I don't see _PS3 (nor _PS0) for that object. If I read it right the GPU > > itself is not described in ACPI tables at all. > > OK > > > > > > then the root port and then turn > > > > > off the power resource (which is attached to the root port) resulting > > > > > the topology entering D3cold. > > > > > > > > I don't see that happening in the AML though. > > > > > > Which AML do you mean, specifically? The _OFF method for the root > > > port's _PR3 power resource or something else? > > > > The root port's _OFF method for the power resource returned by its _PR3. 
> > OK, so without the $subject patch we (1) program the downstream > component (GPU) into D3hot, then we (2) program the port holding it > into D3hot and then we (3) let the AML (_OFF for the power resource > listed by _PR3 under the port object) run. > > Something strange happens at this point (and I guess that _OFF doesn't > even reach the point where it removes power from the port which is why > we see a lock-up). > it does though. I will post the data shortly (with the change in power consumption), as I also want to do the ACPI traces now. > We know that skipping (1) makes things work and we kind of suspect > that skipping (3) would make things work either, but what about doing > (1) and (3) without (2)? > > > > > Basically the difference is that when Windows 7 or Linux (the _REV==5 > > > > check) then we directly do link disable whereas in Windows 8+ we invoke > > > > LKDS() method that puts the link into L2/L3. None of the fields they > > > > access seem to touch the GPU itself. > > > > > > So that may be where the problem is. > > > > > > Putting the downstream component into PCI D[1-3] is expected to put > > > the link into L1, so I'm not sure how that plays with the later > > > attempt to put it into L2/L3 Ready. > > > > That should be fine. What I've seen the link goes into L1 when > > downstream component is put to D-state (not D0) and then it is put back > > to L0 when L2/3 ready is propagated. Eventually it goes into L2 or L3. > > Well, that's the expected behavior, but the observed behavior isn't as > expected. :-) > > > > Also, L2/L3 Ready
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Thu, Nov 21, 2019 at 8:49 PM Mika Westerberg wrote: > > On Thu, Nov 21, 2019 at 04:43:24PM +0100, Rafael J. Wysocki wrote: > > On Thu, Nov 21, 2019 at 1:52 PM Mika Westerberg > > wrote: > > > > > > On Thu, Nov 21, 2019 at 01:46:14PM +0200, Mika Westerberg wrote: > > > > On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote: > > > > > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg > > > > > wrote: > > > > > > > > > > > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote: > > > > > > > > last week or so I found systems where the GPU was under the "PCI > > > > > > > > Express Root Port" (name from lspci) and on those systems all > > > > > > > > of that > > > > > > > > seems to work. So I am wondering if it's indeed just the 0x1901 > > > > > > > > one, > > > > > > > > which also explains Mikas case that Thunderbolt stuff works as > > > > > > > > devices > > > > > > > > never get populated under this particular bridge controller, > > > > > > > > but under > > > > > > > > those "Root Port"s > > > > > > > > > > > > > > It always is a PCIe port, but its location within the SoC may > > > > > > > matter. > > > > > > > > > > > > Exactly. Intel hardware has PCIe ports on CPU side (these are called > > > > > > PEG, PCI Express Graphics, ports), and the PCH side. I think the IP > > > > > > is > > > > > > still the same. > > > > > > > > > > > > > Also some custom AML-based power management is involved and that > > > > > > > may > > > > > > > be making specific assumptions on the configuration of the SoC > > > > > > > and the > > > > > > > GPU at the time of its invocation which unfortunately are not > > > > > > > known to > > > > > > > us. > > > > > > > > > > > > > > However, it looks like the AML invoked to power down the GPU from > > > > > > > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at > > > > > > > that point, so it looks like that AML tries to access device > > > > > > > memory on > > > > > > > the GPU (beyond the PCI config space) or similar which is not > > > > > > > accessible in PCI power states below D0. > > > > > > > > > > > > Or the PCI config space of the GPU when the parent root port is in > > > > > > D3hot > > > > > > (as it is the case here). Also then the GPU config space is not > > > > > > accessible. > > > > > > > > > > Why would the parent port be in D3hot at that point? Wouldn't that be > > > > > a suspend ordering violation? > > > > > > > > No. We put the GPU into D3hot first, > > > > OK > > > > Does this involve any AML, like a _PS3 under the GPU object? > > I don't see _PS3 (nor _PS0) for that object. If I read it right the GPU > itself is not described in ACPI tables at all. OK > > > > then the root port and then turn > > > > off the power resource (which is attached to the root port) resulting > > > > the topology entering D3cold. > > > > > > I don't see that happening in the AML though. > > > > Which AML do you mean, specifically? The _OFF method for the root > > port's _PR3 power resource or something else? > > The root port's _OFF method for the power resource returned by its _PR3. OK, so without the $subject patch we (1) program the downstream component (GPU) into D3hot, then we (2) program the port holding it into D3hot and then we (3) let the AML (_OFF for the power resource listed by _PR3 under the port object) run. Something strange happens at this point (and I guess that _OFF doesn't even reach the point where it removes power from the port which is why we see a lock-up). 
We know that skipping (1) makes things work and we kind of suspect that skipping (3) would make things work either, but what about doing (1) and (3) without (2)? > > > Basically the difference is that when Windows 7 or Linux (the _REV==5 > > > check) then we directly do link disable whereas in Windows 8+ we invoke > > > LKDS() method that puts the link into L2/L3. None of the fields they > > > access seem to touch the GPU itself. > > > > So that may be where the problem is. > > > > Putting the downstream component into PCI D[1-3] is expected to put > > the link into L1, so I'm not sure how that plays with the later > > attempt to put it into L2/L3 Ready. > > That should be fine. What I've seen the link goes into L1 when > downstream component is put to D-state (not D0) and then it is put back > to L0 when L2/3 ready is propagated. Eventually it goes into L2 or L3. Well, that's the expected behavior, but the observed behavior isn't as expected. :-) > > Also, L2/L3 Ready is expected to be transient, so finally power should > > be removed somehow. > > There is GPIO for both power and PERST, I think the line here: > > \_SB.SGOV (0x01010004, Zero) > > is the one that removes power. OK > > > LKDS() for the first PEG port looks like this: > > > > > >P0L2 = One > > >Sleep (0x10) > > >Local0 = Zero > > >While (P0L2) > > >{ > > > If ((Local0 > 0x04)) > > > { > > > Break > > > } > > > > > >
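A simple cross-check for whether the _OFF path really cuts power (the change in consumption mentioned above) is to compare the idle battery draw with the GPU up and after the power resource has been turned off. A sketch, assuming a battery exposed as BAT0 with a power_now attribute in microwatts (some models only provide current_now/voltage_now):

    #!/usr/bin/env python3
    # Sketch: average the battery discharge rate over a window, to compare
    # idle draw before and after the GPU power-off path has run. Assumes
    # /sys/class/power_supply/BAT0/power_now (microwatts); run on battery
    # with screen brightness and workload kept constant between samples.
    import time

    BAT = "/sys/class/power_supply/BAT0/power_now"

    def average_draw(seconds=30, interval=1.0):
        readings = []
        end = time.time() + seconds
        while time.time() < end:
            with open(BAT) as f:
                readings.append(int(f.read()))
            time.sleep(interval)
        return sum(readings) / len(readings) / 1_000_000   # watts

    print("average draw: %.2f W" % average_draw())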
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Thu, Nov 21, 2019 at 04:43:24PM +0100, Rafael J. Wysocki wrote: > On Thu, Nov 21, 2019 at 1:52 PM Mika Westerberg > wrote: > > > > On Thu, Nov 21, 2019 at 01:46:14PM +0200, Mika Westerberg wrote: > > > On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote: > > > > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg > > > > wrote: > > > > > > > > > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote: > > > > > > > last week or so I found systems where the GPU was under the "PCI > > > > > > > Express Root Port" (name from lspci) and on those systems all of > > > > > > > that > > > > > > > seems to work. So I am wondering if it's indeed just the 0x1901 > > > > > > > one, > > > > > > > which also explains Mikas case that Thunderbolt stuff works as > > > > > > > devices > > > > > > > never get populated under this particular bridge controller, but > > > > > > > under > > > > > > > those "Root Port"s > > > > > > > > > > > > It always is a PCIe port, but its location within the SoC may > > > > > > matter. > > > > > > > > > > Exactly. Intel hardware has PCIe ports on CPU side (these are called > > > > > PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is > > > > > still the same. > > > > > > > > > > > Also some custom AML-based power management is involved and that may > > > > > > be making specific assumptions on the configuration of the SoC and > > > > > > the > > > > > > GPU at the time of its invocation which unfortunately are not known > > > > > > to > > > > > > us. > > > > > > > > > > > > However, it looks like the AML invoked to power down the GPU from > > > > > > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at > > > > > > that point, so it looks like that AML tries to access device memory > > > > > > on > > > > > > the GPU (beyond the PCI config space) or similar which is not > > > > > > accessible in PCI power states below D0. > > > > > > > > > > Or the PCI config space of the GPU when the parent root port is in > > > > > D3hot > > > > > (as it is the case here). Also then the GPU config space is not > > > > > accessible. > > > > > > > > Why would the parent port be in D3hot at that point? Wouldn't that be > > > > a suspend ordering violation? > > > > > > No. We put the GPU into D3hot first, > > OK > > Does this involve any AML, like a _PS3 under the GPU object? I don't see _PS3 (nor _PS0) for that object. If I read it right the GPU itself is not described in ACPI tables at all. > > > then the root port and then turn > > > off the power resource (which is attached to the root port) resulting > > > the topology entering D3cold. > > > > I don't see that happening in the AML though. > > Which AML do you mean, specifically? The _OFF method for the root > port's _PR3 power resource or something else? The root port's _OFF method for the power resource returned by its _PR3. > > Basically the difference is that when Windows 7 or Linux (the _REV==5 > > check) then we directly do link disable whereas in Windows 8+ we invoke > > LKDS() method that puts the link into L2/L3. None of the fields they > > access seem to touch the GPU itself. > > So that may be where the problem is. > > Putting the downstream component into PCI D[1-3] is expected to put > the link into L1, so I'm not sure how that plays with the later > attempt to put it into L2/L3 Ready. That should be fine. What I've seen the link goes into L1 when downstream component is put to D-state (not D0) and then it is put back to L0 when L2/3 ready is propagated. 
Eventually it goes into L2 or L3. > Also, L2/L3 Ready is expected to be transient, so finally power should > be removed somehow. There is GPIO for both power and PERST, I think the line here: \_SB.SGOV (0x01010004, Zero) is the one that removes power. > > LKDS() for the first PEG port looks like this: > > > >P0L2 = One > >Sleep (0x10) > >Local0 = Zero > >While (P0L2) > >{ > > If ((Local0 > 0x04)) > > { > > Break > > } > > > > Sleep (0x10) > > Local0++ > >} > > > > One thing that comes to mind is that the loop can end even if P0L2 is > > not cleared as it does only 5 iterations with 16 ms sleep between. Maybe > > Sleep() is implemented differently in Windows? I mean Linux may be > > "faster" here and return prematurely and if we leave the port into D0 > > this does not happen, or something. I'm just throwing out ideas :) > > But this actually works for the downstream component in D0, doesn't it? It does and that leaves the link in L0 so it could be that then the above AML works better or something. That reminds me, ASPM may have something to do with this as well. > Also, if the downstream component is in D0, the port actually should > stay in D0 too, so what would happen with the $subject patch applied? Parent port cannot be lower D-state than the child so I agree it should stay in D0 as well. However, it seems that what happens
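The parent/child ordering can also be observed directly: the PowerState field in each device's PMCSR says whether it currently sits in D0..D3hot (D3cold never shows up there, since a device in D3cold no longer answers config reads at all). A sketch, with the two addresses below being placeholders for the root port and the GPU:

    #!/usr/bin/env python3
    # Sketch: decode PMCSR bits 1:0 (the PCI PM PowerState field) for the
    # GPU and its parent port from sysfs config space. Run as root; reading
    # all-ones means the device is no longer reachable at all.
    DEVICES = {
        "root port": "/sys/bus/pci/devices/0000:00:01.0/config",  # assumed
        "GPU":       "/sys/bus/pci/devices/0000:01:00.0/config",  # assumed
    }

    def d_state(path):
        with open(path, "rb") as f:
            cfg = f.read(256)
        if cfg[0] == 0xff and cfg[1] == 0xff:
            return "config space reads as all-ones (device unreachable)"
        cap = cfg[0x34]                       # capabilities pointer
        while cap:
            cap_id, nxt = cfg[cap], cfg[cap + 1]
            if cap_id == 0x01:                # PCI Power Management capability
                pmcsr = cfg[cap + 4] | (cfg[cap + 5] << 8)
                return "D%d" % (pmcsr & 0x3)
            cap = nxt
        return "no PM capability found"

    for name, path in DEVICES.items():
        print(name + ":", d_state(path))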
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Thu, Nov 21, 2019 at 5:06 PM Karol Herbst wrote: > > On Thu, Nov 21, 2019 at 4:47 PM Rafael J. Wysocki wrote: > > > > On Thu, Nov 21, 2019 at 1:53 PM Karol Herbst wrote: > > > > > > On Thu, Nov 21, 2019 at 12:46 PM Mika Westerberg > > > wrote: > > > > > > > > On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote: > > > > > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg > > > > > wrote: > > > > > > > > > > > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote: > > > > > > > > last week or so I found systems where the GPU was under the "PCI > > > > > > > > Express Root Port" (name from lspci) and on those systems all > > > > > > > > of that > > > > > > > > seems to work. So I am wondering if it's indeed just the 0x1901 > > > > > > > > one, > > > > > > > > which also explains Mikas case that Thunderbolt stuff works as > > > > > > > > devices > > > > > > > > never get populated under this particular bridge controller, > > > > > > > > but under > > > > > > > > those "Root Port"s > > > > > > > > > > > > > > It always is a PCIe port, but its location within the SoC may > > > > > > > matter. > > > > > > > > > > > > Exactly. Intel hardware has PCIe ports on CPU side (these are called > > > > > > PEG, PCI Express Graphics, ports), and the PCH side. I think the IP > > > > > > is > > > > > > still the same. > > > > > > > > > > > > yeah, I meant the bridge controller with the ID 0x1901 is on the CPU > > > side. And if the Nvidia GPU is on a port on the PCH side it all seems > > > to work just fine. > > > > But that may involve different AML too, may it not? > > > > > > > > > Also some custom AML-based power management is involved and that > > > > > > > may > > > > > > > be making specific assumptions on the configuration of the SoC > > > > > > > and the > > > > > > > GPU at the time of its invocation which unfortunately are not > > > > > > > known to > > > > > > > us. > > > > > > > > > > > > > > However, it looks like the AML invoked to power down the GPU from > > > > > > > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at > > > > > > > that point, so it looks like that AML tries to access device > > > > > > > memory on > > > > > > > the GPU (beyond the PCI config space) or similar which is not > > > > > > > accessible in PCI power states below D0. > > > > > > > > > > > > Or the PCI config space of the GPU when the parent root port is in > > > > > > D3hot > > > > > > (as it is the case here). Also then the GPU config space is not > > > > > > accessible. > > > > > > > > > > Why would the parent port be in D3hot at that point? Wouldn't that be > > > > > a suspend ordering violation? > > > > > > > > No. We put the GPU into D3hot first, then the root port and then turn > > > > off the power resource (which is attached to the root port) resulting > > > > the topology entering D3cold. > > > > > > > > > > If the kernel does a D0 -> D3hot -> D0 cycle this works as well, but > > > the power savings are way lower, so I kind of prefer skipping D3hot > > > instead of D3cold. Skipping D3hot doesn't seem to make any difference > > > in power savings in my testing. > > > > OK > > > > What exactly did you do to skip D3cold in your testing? > > > > For that I poked into the PCI registers directly and skipped doing the > ACPI calls and simply checked for the idle power consumption on my > laptop. That doesn't involve the PCIe port PM, however. > But I guess I should retest with calling pci_d3cold_disable > from nouveau instead? 
Or is there a different preferable way of > testing this? There is a sysfs attribute called "d3cold_allowed" which can be used for "blocking" D3cold, so can you please retest using that? ___ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Thu, Nov 21, 2019 at 4:47 PM Rafael J. Wysocki wrote: > > On Thu, Nov 21, 2019 at 1:53 PM Karol Herbst wrote: > > > > On Thu, Nov 21, 2019 at 12:46 PM Mika Westerberg > > wrote: > > > > > > On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote: > > > > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg > > > > wrote: > > > > > > > > > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote: > > > > > > > last week or so I found systems where the GPU was under the "PCI > > > > > > > Express Root Port" (name from lspci) and on those systems all of > > > > > > > that > > > > > > > seems to work. So I am wondering if it's indeed just the 0x1901 > > > > > > > one, > > > > > > > which also explains Mikas case that Thunderbolt stuff works as > > > > > > > devices > > > > > > > never get populated under this particular bridge controller, but > > > > > > > under > > > > > > > those "Root Port"s > > > > > > > > > > > > It always is a PCIe port, but its location within the SoC may > > > > > > matter. > > > > > > > > > > Exactly. Intel hardware has PCIe ports on CPU side (these are called > > > > > PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is > > > > > still the same. > > > > > > > > > yeah, I meant the bridge controller with the ID 0x1901 is on the CPU > > side. And if the Nvidia GPU is on a port on the PCH side it all seems > > to work just fine. > > But that may involve different AML too, may it not? > > > > > > > Also some custom AML-based power management is involved and that may > > > > > > be making specific assumptions on the configuration of the SoC and > > > > > > the > > > > > > GPU at the time of its invocation which unfortunately are not known > > > > > > to > > > > > > us. > > > > > > > > > > > > However, it looks like the AML invoked to power down the GPU from > > > > > > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at > > > > > > that point, so it looks like that AML tries to access device memory > > > > > > on > > > > > > the GPU (beyond the PCI config space) or similar which is not > > > > > > accessible in PCI power states below D0. > > > > > > > > > > Or the PCI config space of the GPU when the parent root port is in > > > > > D3hot > > > > > (as it is the case here). Also then the GPU config space is not > > > > > accessible. > > > > > > > > Why would the parent port be in D3hot at that point? Wouldn't that be > > > > a suspend ordering violation? > > > > > > No. We put the GPU into D3hot first, then the root port and then turn > > > off the power resource (which is attached to the root port) resulting > > > the topology entering D3cold. > > > > > > > If the kernel does a D0 -> D3hot -> D0 cycle this works as well, but > > the power savings are way lower, so I kind of prefer skipping D3hot > > instead of D3cold. Skipping D3hot doesn't seem to make any difference > > in power savings in my testing. > > OK > > What exactly did you do to skip D3cold in your testing? > For that I poked into the PCI registers directly and skipped doing the ACPI calls and simply checked for the idle power consumption on my laptop. But I guess I should retest with calling pci_d3cold_disable from nouveau instead? Or is there a different preferable way of testing this? ___ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
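For context, the direct register poking described above (D3hot and back without any ACPI involvement) boils down to rewriting the PowerState field of PMCSR. A rough sketch; the device address and the PM capability offset are placeholders (take the real offset from lspci -vv), the driver should be unbound first, and this only exercises D3hot, which matches the observation that the savings are much smaller than with the full D3cold path:

    #!/usr/bin/env python3
    # Sketch: move a device between D0 and D3hot by writing PMCSR directly,
    # bypassing pci_set_power_state() and the platform ACPI methods.
    # Assumes the PM capability sits at offset 0x60; adjust per device.
    # Run as root, with no driver bound to the device.
    import time

    CFG = "/sys/bus/pci/devices/0000:01:00.0/config"   # assumed address
    PMCSR = 0x60 + 4                                   # PM cap offset + 4

    def pmcsr(new_state=None):
        with open(CFG, "r+b") as f:
            f.seek(PMCSR)
            val = int.from_bytes(f.read(2), "little")
            if new_state is not None:
                f.seek(PMCSR)
                f.write(((val & ~0x3) | new_state).to_bytes(2, "little"))
            return val & 0x3

    print("before: D%d" % pmcsr())
    pmcsr(3)                 # request D3hot
    time.sleep(0.01)         # settle time after a power state change
    print("now:    D%d" % pmcsr())
    pmcsr(0)                 # back to D0
    time.sleep(0.01)
    print("after:  D%d" % pmcsr())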
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Thu, Nov 21, 2019 at 1:53 PM Karol Herbst wrote: > > On Thu, Nov 21, 2019 at 12:46 PM Mika Westerberg > wrote: > > > > On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote: > > > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg > > > wrote: > > > > > > > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote: > > > > > > last week or so I found systems where the GPU was under the "PCI > > > > > > Express Root Port" (name from lspci) and on those systems all of > > > > > > that > > > > > > seems to work. So I am wondering if it's indeed just the 0x1901 one, > > > > > > which also explains Mikas case that Thunderbolt stuff works as > > > > > > devices > > > > > > never get populated under this particular bridge controller, but > > > > > > under > > > > > > those "Root Port"s > > > > > > > > > > It always is a PCIe port, but its location within the SoC may matter. > > > > > > > > Exactly. Intel hardware has PCIe ports on CPU side (these are called > > > > PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is > > > > still the same. > > > > > > yeah, I meant the bridge controller with the ID 0x1901 is on the CPU > side. And if the Nvidia GPU is on a port on the PCH side it all seems > to work just fine. But that may involve different AML too, may it not? > > > > > Also some custom AML-based power management is involved and that may > > > > > be making specific assumptions on the configuration of the SoC and the > > > > > GPU at the time of its invocation which unfortunately are not known to > > > > > us. > > > > > > > > > > However, it looks like the AML invoked to power down the GPU from > > > > > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at > > > > > that point, so it looks like that AML tries to access device memory on > > > > > the GPU (beyond the PCI config space) or similar which is not > > > > > accessible in PCI power states below D0. > > > > > > > > Or the PCI config space of the GPU when the parent root port is in D3hot > > > > (as it is the case here). Also then the GPU config space is not > > > > accessible. > > > > > > Why would the parent port be in D3hot at that point? Wouldn't that be > > > a suspend ordering violation? > > > > No. We put the GPU into D3hot first, then the root port and then turn > > off the power resource (which is attached to the root port) resulting > > the topology entering D3cold. > > > > If the kernel does a D0 -> D3hot -> D0 cycle this works as well, but > the power savings are way lower, so I kind of prefer skipping D3hot > instead of D3cold. Skipping D3hot doesn't seem to make any difference > in power savings in my testing. OK What exactly did you do to skip D3cold in your testing? ___ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Thu, Nov 21, 2019 at 1:52 PM Mika Westerberg wrote: > > On Thu, Nov 21, 2019 at 01:46:14PM +0200, Mika Westerberg wrote: > > On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote: > > > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg > > > wrote: > > > > > > > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote: > > > > > > last week or so I found systems where the GPU was under the "PCI > > > > > > Express Root Port" (name from lspci) and on those systems all of > > > > > > that > > > > > > seems to work. So I am wondering if it's indeed just the 0x1901 one, > > > > > > which also explains Mikas case that Thunderbolt stuff works as > > > > > > devices > > > > > > never get populated under this particular bridge controller, but > > > > > > under > > > > > > those "Root Port"s > > > > > > > > > > It always is a PCIe port, but its location within the SoC may matter. > > > > > > > > Exactly. Intel hardware has PCIe ports on CPU side (these are called > > > > PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is > > > > still the same. > > > > > > > > > Also some custom AML-based power management is involved and that may > > > > > be making specific assumptions on the configuration of the SoC and the > > > > > GPU at the time of its invocation which unfortunately are not known to > > > > > us. > > > > > > > > > > However, it looks like the AML invoked to power down the GPU from > > > > > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at > > > > > that point, so it looks like that AML tries to access device memory on > > > > > the GPU (beyond the PCI config space) or similar which is not > > > > > accessible in PCI power states below D0. > > > > > > > > Or the PCI config space of the GPU when the parent root port is in D3hot > > > > (as it is the case here). Also then the GPU config space is not > > > > accessible. > > > > > > Why would the parent port be in D3hot at that point? Wouldn't that be > > > a suspend ordering violation? > > > > No. We put the GPU into D3hot first, OK Does this involve any AML, like a _PS3 under the GPU object? > > then the root port and then turn > > off the power resource (which is attached to the root port) resulting > > the topology entering D3cold. > > I don't see that happening in the AML though. Which AML do you mean, specifically? The _OFF method for the root port's _PR3 power resource or something else? > Basically the difference is that when Windows 7 or Linux (the _REV==5 > check) then we directly do link disable whereas in Windows 8+ we invoke > LKDS() method that puts the link into L2/L3. None of the fields they > access seem to touch the GPU itself. So that may be where the problem is. Putting the downstream component into PCI D[1-3] is expected to put the link into L1, so I'm not sure how that plays with the later attempt to put it into L2/L3 Ready. Also, L2/L3 Ready is expected to be transient, so finally power should be removed somehow. > LKDS() for the first PEG port looks like this: > >P0L2 = One >Sleep (0x10) >Local0 = Zero >While (P0L2) >{ > If ((Local0 > 0x04)) > { > Break > } > > Sleep (0x10) > Local0++ >} > > One thing that comes to mind is that the loop can end even if P0L2 is > not cleared as it does only 5 iterations with 16 ms sleep between. Maybe > Sleep() is implemented differently in Windows? I mean Linux may be > "faster" here and return prematurely and if we leave the port into D0 > this does not happen, or something. 
I'm just throwing out ideas :) But this actually works for the downstream component in D0, doesn't it? Also, if the downstream component is in D0, the port actually should stay in D0 too, so what would happen with the $subject patch applied? ___ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Thu, Nov 21, 2019 at 1:52 PM Mika Westerberg wrote: > > On Thu, Nov 21, 2019 at 01:46:14PM +0200, Mika Westerberg wrote: > > On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote: > > > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg > > > wrote: > > > > > > > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote: > > > > > > last week or so I found systems where the GPU was under the "PCI > > > > > > Express Root Port" (name from lspci) and on those systems all of > > > > > > that > > > > > > seems to work. So I am wondering if it's indeed just the 0x1901 one, > > > > > > which also explains Mikas case that Thunderbolt stuff works as > > > > > > devices > > > > > > never get populated under this particular bridge controller, but > > > > > > under > > > > > > those "Root Port"s > > > > > > > > > > It always is a PCIe port, but its location within the SoC may matter. > > > > > > > > Exactly. Intel hardware has PCIe ports on CPU side (these are called > > > > PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is > > > > still the same. > > > > > > > > > Also some custom AML-based power management is involved and that may > > > > > be making specific assumptions on the configuration of the SoC and the > > > > > GPU at the time of its invocation which unfortunately are not known to > > > > > us. > > > > > > > > > > However, it looks like the AML invoked to power down the GPU from > > > > > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at > > > > > that point, so it looks like that AML tries to access device memory on > > > > > the GPU (beyond the PCI config space) or similar which is not > > > > > accessible in PCI power states below D0. > > > > > > > > Or the PCI config space of the GPU when the parent root port is in D3hot > > > > (as it is the case here). Also then the GPU config space is not > > > > accessible. > > > > > > Why would the parent port be in D3hot at that point? Wouldn't that be > > > a suspend ordering violation? > > > > No. We put the GPU into D3hot first, then the root port and then turn > > off the power resource (which is attached to the root port) resulting > > the topology entering D3cold. > > I don't see that happening in the AML though. > > Basically the difference is that when Windows 7 or Linux (the _REV==5 > check) then we directly do link disable whereas in Windows 8+ we invoke > LKDS() method that puts the link into L2/L3. None of the fields they > access seem to touch the GPU itself. > > LKDS() for the first PEG port looks like this: > >P0L2 = One >Sleep (0x10) >Local0 = Zero >While (P0L2) >{ > If ((Local0 > 0x04)) > { > Break > } > > Sleep (0x10) > Local0++ >} > > One thing that comes to mind is that the loop can end even if P0L2 is > not cleared as it does only 5 iterations with 16 ms sleep between. Maybe > Sleep() is implemented differently in Windows? I mean Linux may be > "faster" here and return prematurely and if we leave the port into D0 > this does not happen, or something. I'm just throwing out ideas :) > keep in mind, that I am able to hit this bug with my python script: https://raw.githubusercontent.com/karolherbst/pci-stub-runpm/master/nv_runpm_bug_test.py ___ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Thu, Nov 21, 2019 at 01:46:14PM +0200, Mika Westerberg wrote: > On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote: > > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg > > wrote: > > > > > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote: > > > > > last week or so I found systems where the GPU was under the "PCI > > > > > Express Root Port" (name from lspci) and on those systems all of that > > > > > seems to work. So I am wondering if it's indeed just the 0x1901 one, > > > > > which also explains Mikas case that Thunderbolt stuff works as devices > > > > > never get populated under this particular bridge controller, but under > > > > > those "Root Port"s > > > > > > > > It always is a PCIe port, but its location within the SoC may matter. > > > > > > Exactly. Intel hardware has PCIe ports on CPU side (these are called > > > PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is > > > still the same. > > > > > > > Also some custom AML-based power management is involved and that may > > > > be making specific assumptions on the configuration of the SoC and the > > > > GPU at the time of its invocation which unfortunately are not known to > > > > us. > > > > > > > > However, it looks like the AML invoked to power down the GPU from > > > > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at > > > > that point, so it looks like that AML tries to access device memory on > > > > the GPU (beyond the PCI config space) or similar which is not > > > > accessible in PCI power states below D0. > > > > > > Or the PCI config space of the GPU when the parent root port is in D3hot > > > (as it is the case here). Also then the GPU config space is not > > > accessible. > > > > Why would the parent port be in D3hot at that point? Wouldn't that be > > a suspend ordering violation? > > No. We put the GPU into D3hot first, then the root port and then turn > off the power resource (which is attached to the root port) resulting > the topology entering D3cold. I don't see that happening in the AML though. Basically the difference is that when Windows 7 or Linux (the _REV==5 check) then we directly do link disable whereas in Windows 8+ we invoke LKDS() method that puts the link into L2/L3. None of the fields they access seem to touch the GPU itself. LKDS() for the first PEG port looks like this: P0L2 = One Sleep (0x10) Local0 = Zero While (P0L2) { If ((Local0 > 0x04)) { Break } Sleep (0x10) Local0++ } One thing that comes to mind is that the loop can end even if P0L2 is not cleared as it does only 5 iterations with 16 ms sleep between. Maybe Sleep() is implemented differently in Windows? I mean Linux may be "faster" here and return prematurely and if we leave the port into D0 this does not happen, or something. I'm just throwing out ideas :) ___ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
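To make the timing concern concrete: the method sleeps 16 ms once and then at most five more times while waiting for P0L2 to clear, so it gives the handshake roughly 96 ms in total and then silently falls through whether or not the bit ever cleared. A small model of the loop (read_p0l2 is just a stand-in callable, not the real chipset register):

    #!/usr/bin/env python3
    # Model of the LKDS() loop quoted above: one initial 16 ms sleep plus up
    # to five more, i.e. a worst-case budget of about 6 * 16 ms = 96 ms
    # before the method gives up with P0L2 still set.
    import time

    def lkds(read_p0l2):
        time.sleep(0.016)              # Sleep (0x10)
        local0 = 0
        while read_p0l2():             # While (P0L2)
            if local0 > 0x04:          # If ((Local0 > 0x04)) { Break }
                return False           # gave up, P0L2 never cleared
            time.sleep(0.016)
            local0 += 1
        return True

    start = time.time()
    ok = lkds(lambda: True)            # hardware that never completes
    print("cleared:", ok, "elapsed: %.0f ms" % ((time.time() - start) * 1000))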
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Thu, Nov 21, 2019 at 12:46 PM Mika Westerberg wrote: > > On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote: > > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg > > wrote: > > > > > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote: > > > > > last week or so I found systems where the GPU was under the "PCI > > > > > Express Root Port" (name from lspci) and on those systems all of that > > > > > seems to work. So I am wondering if it's indeed just the 0x1901 one, > > > > > which also explains Mikas case that Thunderbolt stuff works as devices > > > > > never get populated under this particular bridge controller, but under > > > > > those "Root Port"s > > > > > > > > It always is a PCIe port, but its location within the SoC may matter. > > > > > > Exactly. Intel hardware has PCIe ports on CPU side (these are called > > > PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is > > > still the same. > > > yeah, I meant the bridge controller with the ID 0x1901 is on the CPU side. And if the Nvidia GPU is on a port on the PCH side it all seems to work just fine. > > > > Also some custom AML-based power management is involved and that may > > > > be making specific assumptions on the configuration of the SoC and the > > > > GPU at the time of its invocation which unfortunately are not known to > > > > us. > > > > > > > > However, it looks like the AML invoked to power down the GPU from > > > > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at > > > > that point, so it looks like that AML tries to access device memory on > > > > the GPU (beyond the PCI config space) or similar which is not > > > > accessible in PCI power states below D0. > > > > > > Or the PCI config space of the GPU when the parent root port is in D3hot > > > (as it is the case here). Also then the GPU config space is not > > > accessible. > > > > Why would the parent port be in D3hot at that point? Wouldn't that be > > a suspend ordering violation? > > No. We put the GPU into D3hot first, then the root port and then turn > off the power resource (which is attached to the root port) resulting > the topology entering D3cold. > If the kernel does a D0 -> D3hot -> D0 cycle this works as well, but the power savings are way lower, so I kind of prefer skipping D3hot instead of D3cold. Skipping D3hot doesn't seem to make any difference in power savings in my testing. > > > I took a look at the HP Omen ACPI tables which has similar problem and > > > there is also check for Windows 7 (but not Linux) so I think one > > > alternative workaround would be to add these devices into > > > acpi_osi_dmi_table[] where .callback is set to dmi_disable_osi_win8 (or > > > pass 'acpi_osi="!Windows 2012"' in the kernel command line). > > > > I'd like to understand the facts that have been established so far > > before deciding what to do about them. :-) > > Yes, I agree :) > Yeah.. and I think those would be too many as we know of several models with this laptops from Lenovo, Dell and HP and random other models from random other OEMs. I think we won't ever be able to blacklist all models if we go that way as those might be just way too many. Also I know from some reports on bumblebee bugs (hitting the same issue essentially) that the acpi_osi overwrite didn't help every user. ___ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote: > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg > wrote: > > > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote: > > > > last week or so I found systems where the GPU was under the "PCI > > > > Express Root Port" (name from lspci) and on those systems all of that > > > > seems to work. So I am wondering if it's indeed just the 0x1901 one, > > > > which also explains Mikas case that Thunderbolt stuff works as devices > > > > never get populated under this particular bridge controller, but under > > > > those "Root Port"s > > > > > > It always is a PCIe port, but its location within the SoC may matter. > > > > Exactly. Intel hardware has PCIe ports on CPU side (these are called > > PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is > > still the same. > > > > > Also some custom AML-based power management is involved and that may > > > be making specific assumptions on the configuration of the SoC and the > > > GPU at the time of its invocation which unfortunately are not known to > > > us. > > > > > > However, it looks like the AML invoked to power down the GPU from > > > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at > > > that point, so it looks like that AML tries to access device memory on > > > the GPU (beyond the PCI config space) or similar which is not > > > accessible in PCI power states below D0. > > > > Or the PCI config space of the GPU when the parent root port is in D3hot > > (as it is the case here). Also then the GPU config space is not > > accessible. > > Why would the parent port be in D3hot at that point? Wouldn't that be > a suspend ordering violation? No. We put the GPU into D3hot first, then the root port and then turn off the power resource (which is attached to the root port) resulting the topology entering D3cold. > > I took a look at the HP Omen ACPI tables which has similar problem and > > there is also check for Windows 7 (but not Linux) so I think one > > alternative workaround would be to add these devices into > > acpi_osi_dmi_table[] where .callback is set to dmi_disable_osi_win8 (or > > pass 'acpi_osi="!Windows 2012"' in the kernel command line). > > I'd like to understand the facts that have been established so far > before deciding what to do about them. :-) Yes, I agree :) ___ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg wrote: > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote: > > > last week or so I found systems where the GPU was under the "PCI > > > Express Root Port" (name from lspci) and on those systems all of that > > > seems to work. So I am wondering if it's indeed just the 0x1901 one, > > > which also explains Mikas case that Thunderbolt stuff works as devices > > > never get populated under this particular bridge controller, but under > > > those "Root Port"s > > > > It always is a PCIe port, but its location within the SoC may matter. > > Exactly. Intel hardware has PCIe ports on CPU side (these are called > PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is > still the same. > > > Also some custom AML-based power management is involved and that may > > be making specific assumptions on the configuration of the SoC and the > > GPU at the time of its invocation which unfortunately are not known to > > us. > > > > However, it looks like the AML invoked to power down the GPU from > > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at > > that point, so it looks like that AML tries to access device memory on > > the GPU (beyond the PCI config space) or similar which is not > > accessible in PCI power states below D0. > > Or the PCI config space of the GPU when the parent root port is in D3hot > (as it is the case here). Also then the GPU config space is not > accessible. Why would the parent port be in D3hot at that point? Wouldn't that be a suspend ordering violation? > I took a look at the HP Omen ACPI tables which has similar problem and > there is also check for Windows 7 (but not Linux) so I think one > alternative workaround would be to add these devices into > acpi_osi_dmi_table[] where .callback is set to dmi_disable_osi_win8 (or > pass 'acpi_osi="!Windows 2012"' in the kernel command line). I'd like to understand the facts that have been established so far before deciding what to do about them. :-) ___ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Thu, Nov 21, 2019 at 12:17 PM Mika Westerberg wrote: > > On Thu, Nov 21, 2019 at 12:03:52PM +0100, Rafael J. Wysocki wrote: > > On Thu, Nov 21, 2019 at 11:14 AM Mika Westerberg > > wrote: > > > > > > On Wed, Nov 20, 2019 at 10:36:31PM +0100, Karol Herbst wrote: > > > > with the branch and patch applied: > > > > https://gist.githubusercontent.com/karolherbst/03c4c8141b0fa292d781badfa186479e/raw/5c62640afbc57d6e69ea924c338bd2836e770d02/gistfile1.txt > > > > > > Thanks for testing. Too bad it did not help :( I suppose there is no > > > change if you increase the delay to say 1s? > > > > Well, look at the original patch in this thread. > > > > What it does is to prevent the device (GPU in this particular case) > > from going into a PCI low-power state before invoking AML to power it > > down (the AML is still invoked after this patch AFAICS), so why would > > that have anything to do with the delays? > > Yes, I know what it does :) I was just thinking that maybe it's still > the link that does not come up when we go back to D0 I guess that's not > the case here. I'm not sure why that would be related to putting the device into, say, PCI D3 before invoking AML to remove power from it. If it is not in PCI D3 at this point, the AML still runs and still removes power from it IIUC, so on the way back the situation is the same regardless: the device has no power which (again) needs to be restored by AML. That (in principle) should not depend on what happened to the device before it lost power. ___ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote: > > last week or so I found systems where the GPU was under the "PCI > > Express Root Port" (name from lspci) and on those systems all of that > > seems to work. So I am wondering if it's indeed just the 0x1901 one, > > which also explains Mikas case that Thunderbolt stuff works as devices > > never get populated under this particular bridge controller, but under > > those "Root Port"s > > It always is a PCIe port, but its location within the SoC may matter. Exactly. Intel hardware has PCIe ports on CPU side (these are called PEG, PCI Express Graphics, ports), and the PCH side. I think the IP is still the same. > Also some custom AML-based power management is involved and that may > be making specific assumptions on the configuration of the SoC and the > GPU at the time of its invocation which unfortunately are not known to > us. > > However, it looks like the AML invoked to power down the GPU from > acpi_pci_set_power_state() gets confused if it is not in PCI D0 at > that point, so it looks like that AML tries to access device memory on > the GPU (beyond the PCI config space) or similar which is not > accessible in PCI power states below D0. Or the PCI config space of the GPU when the parent root port is in D3hot (as it is the case here). Also then the GPU config space is not accessible. I took a look at the HP Omen ACPI tables which has similar problem and there is also check for Windows 7 (but not Linux) so I think one alternative workaround would be to add these devices into acpi_osi_dmi_table[] where .callback is set to dmi_disable_osi_win8 (or pass 'acpi_osi="!Windows 2012"' in the kernel command line). ___ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Thu, Nov 21, 2019 at 12:03:52PM +0100, Rafael J. Wysocki wrote: > On Thu, Nov 21, 2019 at 11:14 AM Mika Westerberg > wrote: > > > > On Wed, Nov 20, 2019 at 10:36:31PM +0100, Karol Herbst wrote: > > > with the branch and patch applied: > > > https://gist.githubusercontent.com/karolherbst/03c4c8141b0fa292d781badfa186479e/raw/5c62640afbc57d6e69ea924c338bd2836e770d02/gistfile1.txt > > > > Thanks for testing. Too bad it did not help :( I suppose there is no > > change if you increase the delay to say 1s? > > Well, look at the original patch in this thread. > > What it does is to prevent the device (GPU in this particular case) > from going into a PCI low-power state before invoking AML to power it > down (the AML is still invoked after this patch AFAICS), so why would > that have anything to do with the delays? Yes, I know what it does :) I was just thinking that maybe it's still the link that does not come up when we go back to D0 I guess that's not the case here. > The only reason would be the AML running too early, but that doesn't > seem likely. IMO more likely is that the AML does something which > cannot be done to a device in a PCI low-power state. It may very well be the case. ___ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Thu, Nov 21, 2019 at 12:08 PM Rafael J. Wysocki wrote: > > On Thu, Nov 21, 2019 at 12:03 PM Rafael J. Wysocki wrote: > > > > On Thu, Nov 21, 2019 at 11:14 AM Mika Westerberg > > wrote: > > > > > > On Wed, Nov 20, 2019 at 10:36:31PM +0100, Karol Herbst wrote: > > > > with the branch and patch applied: > > > > https://gist.githubusercontent.com/karolherbst/03c4c8141b0fa292d781badfa186479e/raw/5c62640afbc57d6e69ea924c338bd2836e770d02/gistfile1.txt > > > > > > Thanks for testing. Too bad it did not help :( I suppose there is no > > > change if you increase the delay to say 1s? > > > > Well, look at the original patch in this thread. > > > > What it does is to prevent the device (GPU in this particular case) > > from going into a PCI low-power state before invoking AML to power it > > down (the AML is still invoked after this patch AFAICS), so why would > > that have anything to do with the delays? > > > > The only reason would be the AML running too early, but that doesn't > > seem likely. IMO more likely is that the AML does something which > > cannot be done to a device in a PCI low-power state. > > BTW, I'm wondering if anyone has tried to skip the AML instead of > skipping the PCI PM in this case (as of 5.4-rc that would be a similar > patch to skip the invocations of > __pci_start/complete_power_transition() in pci_set_power_state() for > the affected device). Moving the dev->broken_nv_runpm test into pci_platform_power_transition() (also for transitions into D0) would be sufficient for that test if I'm not mistaken. ___ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Thu, Nov 21, 2019 at 12:03 PM Rafael J. Wysocki wrote: > > On Thu, Nov 21, 2019 at 11:14 AM Mika Westerberg > wrote: > > > > On Wed, Nov 20, 2019 at 10:36:31PM +0100, Karol Herbst wrote: > > > with the branch and patch applied: > > > https://gist.githubusercontent.com/karolherbst/03c4c8141b0fa292d781badfa186479e/raw/5c62640afbc57d6e69ea924c338bd2836e770d02/gistfile1.txt > > > > Thanks for testing. Too bad it did not help :( I suppose there is no > > change if you increase the delay to say 1s? > > Well, look at the original patch in this thread. > > What it does is to prevent the device (GPU in this particular case) > from going into a PCI low-power state before invoking AML to power it > down (the AML is still invoked after this patch AFAICS), so why would > that have anything to do with the delays? > > The only reason would be the AML running too early, but that doesn't > seem likely. IMO more likely is that the AML does something which > cannot be done to a device in a PCI low-power state. BTW, I'm wondering if anyone has tried to skip the AML instead of skipping the PCI PM in this case (as of 5.4-rc that would be a similar patch to skip the invocations of __pci_start/complete_power_transition() in pci_set_power_state() for the affected device). ___ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Thu, Nov 21, 2019 at 11:14 AM Mika Westerberg wrote: > > On Wed, Nov 20, 2019 at 10:36:31PM +0100, Karol Herbst wrote: > > with the branch and patch applied: > > https://gist.githubusercontent.com/karolherbst/03c4c8141b0fa292d781badfa186479e/raw/5c62640afbc57d6e69ea924c338bd2836e770d02/gistfile1.txt > > Thanks for testing. Too bad it did not help :( I suppose there is no > change if you increase the delay to say 1s? Well, look at the original patch in this thread. What it does is to prevent the device (GPU in this particular case) from going into a PCI low-power state before invoking AML to power it down (the AML is still invoked after this patch AFAICS), so why would that have anything to do with the delays? The only reason would be the AML running too early, but that doesn't seem likely. IMO more likely is that the AML does something which cannot be done to a device in a PCI low-power state. ___ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Wed, Nov 20, 2019 at 10:36:31PM +0100, Karol Herbst wrote: > with the branch and patch applied: > https://gist.githubusercontent.com/karolherbst/03c4c8141b0fa292d781badfa186479e/raw/5c62640afbc57d6e69ea924c338bd2836e770d02/gistfile1.txt Thanks for testing. Too bad it did not help :( I suppose there is no change if you increase the delay to say 1s? ___ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Wed, Nov 20, 2019 at 10:40 PM Karol Herbst wrote: > > On Wed, Nov 20, 2019 at 10:37 PM Rafael J. Wysocki wrote: > > > > On Wed, Nov 20, 2019 at 4:53 PM Mika Westerberg > > wrote: > > > > > > On Wed, Nov 20, 2019 at 04:37:14PM +0100, Karol Herbst wrote: > > > > On Wed, Nov 20, 2019 at 4:15 PM Mika Westerberg > > > > wrote: > > > > > > > > > > On Wed, Nov 20, 2019 at 01:11:52PM +0100, Karol Herbst wrote: > > > > > > On Wed, Nov 20, 2019 at 1:09 PM Mika Westerberg > > > > > > wrote: > > > > > > > > > > > > > > On Wed, Nov 20, 2019 at 12:58:00PM +0100, Karol Herbst wrote: > > > > > > > > overall, what I really want to know is, _why_ does it work on > > > > > > > > windows? > > > > > > > > > > > > > > So do I ;-) > > > > > > > > > > > > > > > Or what are we doing differently on Linux so that it doesn't > > > > > > > > work? If > > > > > > > > anybody has any idea on how we could dig into this and figure > > > > > > > > it out > > > > > > > > on this level, this would probably allow us to get closer to > > > > > > > > the root > > > > > > > > cause? no? > > > > > > > > > > > > > > Have you tried to use the acpi_rev_override parameter in your > > > > > > > system and > > > > > > > does it have any effect? > > > > > > > > > > > > > > Also did you try to trace the ACPI _ON/_OFF() methods? I think > > > > > > > that > > > > > > > should hopefully reveal something. > > > > > > > > > > > > > > > > > > > I think I did in the past and it seemed to have worked, there is > > > > > > just > > > > > > one big issue with this: it's a Dell specific workaround afaik, and > > > > > > this issue plagues not just Dell, but we've seen it on HP and Lenovo > > > > > > laptops as well, and I've heard about users having the same issues > > > > > > on > > > > > > Asus and MSI laptops as well. > > > > > > > > > > Maybe it is not a workaround at all but instead it simply determines > > > > > whether the system supports RTD3 or something like that (IIRC Windows > > > > > 8 > > > > > started supporting it). Maybe Dell added check for Linux because at > > > > > that > > > > > time Linux did not support it. > > > > > > > > > > > > > the point is, it's not checking it by default, so by default you still > > > > run into the windows 8 codepath. > > > > > > Well you can add the quirk to acpi_rev_dmi_table[] so it goes to that > > > path by default. There are a bunch of similar entries for Dell machines. > > > > OK, so the "Linux path" works and the other doesn't. > > > > I thought that this was the other way around, sorry for the confusion. > > > > > Of course this does not help the non-Dell users so we would still need > > > to figure out the root cause. > > > > Right. > > > > Whatever it is, though, AML appears to be involved in it and AFAICS > > there's no evidence that it affects any root ports that are not > > populated with NVidia GPUs. > > > > last week or so I found systems where the GPU was under the "PCI > Express Root Port" (name from lspci) and on those systems all of that > seems to work. So I am wondering if it's indeed just the 0x1901 one, > which also explains Mikas case that Thunderbolt stuff works as devices > never get populated under this particular bridge controller, but under > those "Root Port"s It always is a PCIe port, but its location within the SoC may matter. Also some custom AML-based power management is involved and that may be making specific assumptions on the configuration of the SoC and the GPU at the time of its invocation which unfortunately are not known to us. 
However, it looks like the AML invoked to power down the GPU from acpi_pci_set_power_state() gets confused if it is not in PCI D0 at that point, so it looks like that AML tries to access device memory on the GPU (beyond the PCI config space) or similar which is not accessible in PCI power states below D0. ___ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
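For illustration, a minimal sketch of the kind of quirk under discussion, assuming the goal is simply to keep the GPU out of D3cold when it sits directly below the problematic port. This is not the actual v4 patch: the 0x1901 device ID is taken from the observation above, and pci_d3cold_disable() is used here as a stand-in for whatever per-device flag the real patch sets.

#include <linux/pci.h>

/*
 * Sketch for drivers/pci/quirks.c: keep NVIDIA GPUs that sit directly
 * below the Intel 0x1901 port out of D3cold, so the platform's
 * power-removal AML is never invoked for them.
 */
static void quirk_nvidia_no_d3cold_under_0x1901(struct pci_dev *pdev)
{
	struct pci_dev *bridge = pci_upstream_bridge(pdev);

	if (!bridge || bridge->vendor != PCI_VENDOR_ID_INTEL ||
	    bridge->device != 0x1901)
		return;

	pci_info(pdev, "disabling D3cold below %s\n", pci_name(bridge));
	pci_d3cold_disable(pdev);
}
DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID,
			      PCI_BASE_CLASS_DISPLAY, 16,
			      quirk_nvidia_no_d3cold_under_0x1901);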
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Wed, Nov 20, 2019 at 10:37 PM Rafael J. Wysocki wrote: > > On Wed, Nov 20, 2019 at 4:53 PM Mika Westerberg > wrote: > > > > On Wed, Nov 20, 2019 at 04:37:14PM +0100, Karol Herbst wrote: > > > On Wed, Nov 20, 2019 at 4:15 PM Mika Westerberg > > > wrote: > > > > > > > > On Wed, Nov 20, 2019 at 01:11:52PM +0100, Karol Herbst wrote: > > > > > On Wed, Nov 20, 2019 at 1:09 PM Mika Westerberg > > > > > wrote: > > > > > > > > > > > > On Wed, Nov 20, 2019 at 12:58:00PM +0100, Karol Herbst wrote: > > > > > > > overall, what I really want to know is, _why_ does it work on > > > > > > > windows? > > > > > > > > > > > > So do I ;-) > > > > > > > > > > > > > Or what are we doing differently on Linux so that it doesn't > > > > > > > work? If > > > > > > > anybody has any idea on how we could dig into this and figure it > > > > > > > out > > > > > > > on this level, this would probably allow us to get closer to the > > > > > > > root > > > > > > > cause? no? > > > > > > > > > > > > Have you tried to use the acpi_rev_override parameter in your > > > > > > system and > > > > > > does it have any effect? > > > > > > > > > > > > Also did you try to trace the ACPI _ON/_OFF() methods? I think that > > > > > > should hopefully reveal something. > > > > > > > > > > > > > > > > I think I did in the past and it seemed to have worked, there is just > > > > > one big issue with this: it's a Dell specific workaround afaik, and > > > > > this issue plagues not just Dell, but we've seen it on HP and Lenovo > > > > > laptops as well, and I've heard about users having the same issues on > > > > > Asus and MSI laptops as well. > > > > > > > > Maybe it is not a workaround at all but instead it simply determines > > > > whether the system supports RTD3 or something like that (IIRC Windows 8 > > > > started supporting it). Maybe Dell added check for Linux because at that > > > > time Linux did not support it. > > > > > > > > > > the point is, it's not checking it by default, so by default you still > > > run into the windows 8 codepath. > > > > Well you can add the quirk to acpi_rev_dmi_table[] so it goes to that > > path by default. There are a bunch of similar entries for Dell machines. > > OK, so the "Linux path" works and the other doesn't. > > I thought that this was the other way around, sorry for the confusion. > > > Of course this does not help the non-Dell users so we would still need > > to figure out the root cause. > > Right. > > Whatever it is, though, AML appears to be involved in it and AFAICS > there's no evidence that it affects any root ports that are not > populated with NVidia GPUs. > last week or so I found systems where the GPU was under the "PCI Express Root Port" (name from lspci) and on those systems all of that seems to work. So I am wondering if it's indeed just the 0x1901 one, which also explains Mikas case that Thunderbolt stuff works as devices never get populated under this particular bridge controller, but under those "Root Port"s > Now, one thing is still not clear to me from the discussion so far: is > the _PR3 method you mentioned defined under the GPU device object or > under the port device object? > ___ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Wed, Nov 20, 2019 at 4:53 PM Mika Westerberg wrote: > > On Wed, Nov 20, 2019 at 04:37:14PM +0100, Karol Herbst wrote: > > On Wed, Nov 20, 2019 at 4:15 PM Mika Westerberg > > wrote: > > > > > > On Wed, Nov 20, 2019 at 01:11:52PM +0100, Karol Herbst wrote: > > > > On Wed, Nov 20, 2019 at 1:09 PM Mika Westerberg > > > > wrote: > > > > > > > > > > On Wed, Nov 20, 2019 at 12:58:00PM +0100, Karol Herbst wrote: > > > > > > overall, what I really want to know is, _why_ does it work on > > > > > > windows? > > > > > > > > > > So do I ;-) > > > > > > > > > > > Or what are we doing differently on Linux so that it doesn't work? > > > > > > If > > > > > > anybody has any idea on how we could dig into this and figure it out > > > > > > on this level, this would probably allow us to get closer to the > > > > > > root > > > > > > cause? no? > > > > > > > > > > Have you tried to use the acpi_rev_override parameter in your system > > > > > and > > > > > does it have any effect? > > > > > > > > > > Also did you try to trace the ACPI _ON/_OFF() methods? I think that > > > > > should hopefully reveal something. > > > > > > > > > > > > > I think I did in the past and it seemed to have worked, there is just > > > > one big issue with this: it's a Dell specific workaround afaik, and > > > > this issue plagues not just Dell, but we've seen it on HP and Lenovo > > > > laptops as well, and I've heard about users having the same issues on > > > > Asus and MSI laptops as well. > > > > > > Maybe it is not a workaround at all but instead it simply determines > > > whether the system supports RTD3 or something like that (IIRC Windows 8 > > > started supporting it). Maybe Dell added check for Linux because at that > > > time Linux did not support it. > > > > > > > the point is, it's not checking it by default, so by default you still > > run into the windows 8 codepath. > > Well you can add the quirk to acpi_rev_dmi_table[] so it goes to that > path by default. There are a bunch of similar entries for Dell machines. OK, so the "Linux path" works and the other doesn't. I thought that this was the other way around, sorry for the confusion. > Of course this does not help the non-Dell users so we would still need > to figure out the root cause. Right. Whatever it is, though, AML appears to be involved in it and AFAICS there's no evidence that it affects any root ports that are not populated with NVidia GPUs. Now, one thing is still not clear to me from the discussion so far: is the _PR3 method you mentioned defined under the GPU device object or under the port device object? ___ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
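One way to answer that from the kernel side would be a throwaway debugging hunk along these lines; it only asks ACPICA whether a _PR3 method exists on the GPU's ACPI companion and on the companion of its upstream port. The helper name is made up for illustration and the caller is assumed to pass the GPU's struct pci_dev.

#include <linux/acpi.h>
#include <linux/pci.h>

/*
 * Debugging sketch: report whether _PR3 lives under the GPU device
 * object, under the root port device object, or both.
 */
static void report_pr3_location(struct pci_dev *gpu)
{
	struct pci_dev *port = pci_upstream_bridge(gpu);
	acpi_handle h;

	h = ACPI_HANDLE(&gpu->dev);
	pci_info(gpu, "_PR3 on GPU object: %s\n",
		 h && acpi_has_method(h, "_PR3") ? "yes" : "no");

	if (port) {
		h = ACPI_HANDLE(&port->dev);
		pci_info(port, "_PR3 on port object: %s\n",
			 h && acpi_has_method(h, "_PR3") ? "yes" : "no");
	}
}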
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
with the branch and patch applied: https://gist.githubusercontent.com/karolherbst/03c4c8141b0fa292d781badfa186479e/raw/5c62640afbc57d6e69ea924c338bd2836e770d02/gistfile1.txt On Wed, Nov 20, 2019 at 5:23 PM Mika Westerberg wrote: > > On Wed, Nov 20, 2019 at 05:53:07PM +0200, Mika Westerberg wrote: > > On Wed, Nov 20, 2019 at 04:37:14PM +0100, Karol Herbst wrote: > > > On Wed, Nov 20, 2019 at 4:15 PM Mika Westerberg > > > wrote: > > > > > > > > On Wed, Nov 20, 2019 at 01:11:52PM +0100, Karol Herbst wrote: > > > > > On Wed, Nov 20, 2019 at 1:09 PM Mika Westerberg > > > > > wrote: > > > > > > > > > > > > On Wed, Nov 20, 2019 at 12:58:00PM +0100, Karol Herbst wrote: > > > > > > > overall, what I really want to know is, _why_ does it work on > > > > > > > windows? > > > > > > > > > > > > So do I ;-) > > > > > > > > > > > > > Or what are we doing differently on Linux so that it doesn't > > > > > > > work? If > > > > > > > anybody has any idea on how we could dig into this and figure it > > > > > > > out > > > > > > > on this level, this would probably allow us to get closer to the > > > > > > > root > > > > > > > cause? no? > > > > > > > > > > > > Have you tried to use the acpi_rev_override parameter in your > > > > > > system and > > > > > > does it have any effect? > > > > > > > > > > > > Also did you try to trace the ACPI _ON/_OFF() methods? I think that > > > > > > should hopefully reveal something. > > > > > > > > > > > > > > > > I think I did in the past and it seemed to have worked, there is just > > > > > one big issue with this: it's a Dell specific workaround afaik, and > > > > > this issue plagues not just Dell, but we've seen it on HP and Lenovo > > > > > laptops as well, and I've heard about users having the same issues on > > > > > Asus and MSI laptops as well. > > > > > > > > Maybe it is not a workaround at all but instead it simply determines > > > > whether the system supports RTD3 or something like that (IIRC Windows 8 > > > > started supporting it). Maybe Dell added check for Linux because at that > > > > time Linux did not support it. > > > > > > > > > > the point is, it's not checking it by default, so by default you still > > > run into the windows 8 codepath. > > > > Well you can add the quirk to acpi_rev_dmi_table[] so it goes to that > > path by default. There are a bunch of similar entries for Dell machines. > > > > Of course this does not help the non-Dell users so we would still need > > to figure out the root cause. > > I think I asked you to test the PCIe delay patch and it did not help but > I wonder if it helps if we increase the delay. As an experiment could > you try Bjorn's pci/pm branch. The last two commits are for the delay. > > If you could pull that branch and apply the following patch on top and > give it a try? Then post the dmesg somewhere so we can see whether it > did the delay at all. > > diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c > index 1f319b1175da..1ad6f1372ed5 100644 > --- a/drivers/pci/pci.c > +++ b/drivers/pci/pci.c > @@ -4697,12 +4697,7 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev > *dev) > return; > } > > - /* Take d3cold_delay requirements into account */ > - delay = pci_bus_max_d3cold_delay(dev->subordinate); > - if (!delay) { > - up_read(&pci_bus_sem); > - return; > - } > + delay = 500; > > child = list_first_entry(&dev->subordinate->devices, struct pci_dev, > bus_list); > @@ -4715,7 +4710,7 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev > *dev) > * management for them (see pci_bridge_d3_possible()). 
> */ > if (!pci_is_pcie(dev)) { > - pci_dbg(dev, "waiting %d ms for secondary bus\n", 1000 + > delay); > + pci_info(dev, "waiting %d ms for secondary bus\n", 1000 + > delay); > msleep(1000 + delay); > return; > } > @@ -4741,10 +4736,10 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev > *dev) > return; > > if (pcie_get_speed_cap(dev) <= PCIE_SPEED_5_0GT) { > - pci_dbg(dev, "waiting %d ms for downstream link\n", delay); > + pci_info(dev, "waiting %d ms for downstream link\n", delay); > msleep(delay); > } else { > - pci_dbg(dev, "waiting %d ms for downstream link, after > activation\n", > + pci_info(dev, "waiting %d ms for downstream link, after > activation\n", > delay); > if (!pcie_wait_for_link_delay(dev, true, delay)) { > /* Did not train, no need to wait any further */ > @@ -4753,7 +4748,7 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev > *dev) > } > > if (!pci_device_is_present(child)) { > - pci_dbg(child, "waiting additional %d ms to become > accessible\n", delay); > + pci_info(child, "waiting additional %d ms to become accessible\n", delay); > msleep(delay); > } > }
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Wed, Nov 20, 2019 at 05:53:07PM +0200, Mika Westerberg wrote: > On Wed, Nov 20, 2019 at 04:37:14PM +0100, Karol Herbst wrote: > > On Wed, Nov 20, 2019 at 4:15 PM Mika Westerberg > > wrote: > > > > > > On Wed, Nov 20, 2019 at 01:11:52PM +0100, Karol Herbst wrote: > > > > On Wed, Nov 20, 2019 at 1:09 PM Mika Westerberg > > > > wrote: > > > > > > > > > > On Wed, Nov 20, 2019 at 12:58:00PM +0100, Karol Herbst wrote: > > > > > > overall, what I really want to know is, _why_ does it work on > > > > > > windows? > > > > > > > > > > So do I ;-) > > > > > > > > > > > Or what are we doing differently on Linux so that it doesn't work? > > > > > > If > > > > > > anybody has any idea on how we could dig into this and figure it out > > > > > > on this level, this would probably allow us to get closer to the > > > > > > root > > > > > > cause? no? > > > > > > > > > > Have you tried to use the acpi_rev_override parameter in your system > > > > > and > > > > > does it have any effect? > > > > > > > > > > Also did you try to trace the ACPI _ON/_OFF() methods? I think that > > > > > should hopefully reveal something. > > > > > > > > > > > > > I think I did in the past and it seemed to have worked, there is just > > > > one big issue with this: it's a Dell specific workaround afaik, and > > > > this issue plagues not just Dell, but we've seen it on HP and Lenovo > > > > laptops as well, and I've heard about users having the same issues on > > > > Asus and MSI laptops as well. > > > > > > Maybe it is not a workaround at all but instead it simply determines > > > whether the system supports RTD3 or something like that (IIRC Windows 8 > > > started supporting it). Maybe Dell added check for Linux because at that > > > time Linux did not support it. > > > > > > > the point is, it's not checking it by default, so by default you still > > run into the windows 8 codepath. > > Well you can add the quirk to acpi_rev_dmi_table[] so it goes to that > path by default. There are a bunch of similar entries for Dell machines. > > Of course this does not help the non-Dell users so we would still need > to figure out the root cause. I think I asked you to test the PCIe delay patch and it did not help but I wonder if it helps if we increase the delay. As an experiment could you try Bjorn's pci/pm branch. The last two commits are for the delay. If you could pull that branch and apply the following patch on top and give it a try? Then post the dmesg somewhere so we can see whether it did the delay at all. diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c index 1f319b1175da..1ad6f1372ed5 100644 --- a/drivers/pci/pci.c +++ b/drivers/pci/pci.c @@ -4697,12 +4697,7 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev) return; } - /* Take d3cold_delay requirements into account */ - delay = pci_bus_max_d3cold_delay(dev->subordinate); - if (!delay) { - up_read(&pci_bus_sem); - return; - } + delay = 500; child = list_first_entry(&dev->subordinate->devices, struct pci_dev, bus_list); @@ -4715,7 +4710,7 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev) * management for them (see pci_bridge_d3_possible()). 
*/ if (!pci_is_pcie(dev)) { - pci_dbg(dev, "waiting %d ms for secondary bus\n", 1000 + delay); + pci_info(dev, "waiting %d ms for secondary bus\n", 1000 + delay); msleep(1000 + delay); return; } @@ -4741,10 +4736,10 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev) return; if (pcie_get_speed_cap(dev) <= PCIE_SPEED_5_0GT) { - pci_dbg(dev, "waiting %d ms for downstream link\n", delay); + pci_info(dev, "waiting %d ms for downstream link\n", delay); msleep(delay); } else { - pci_dbg(dev, "waiting %d ms for downstream link, after activation\n", + pci_info(dev, "waiting %d ms for downstream link, after activation\n", delay); if (!pcie_wait_for_link_delay(dev, true, delay)) { /* Did not train, no need to wait any further */ @@ -4753,7 +4748,7 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev) } if (!pci_device_is_present(child)) { - pci_dbg(child, "waiting additional %d ms to become accessible\n", delay); + pci_info(child, "waiting additional %d ms to become accessible\n", delay); msleep(delay); } } ___ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Wed, Nov 20, 2019 at 04:37:14PM +0100, Karol Herbst wrote: > On Wed, Nov 20, 2019 at 4:15 PM Mika Westerberg > wrote: > > > > On Wed, Nov 20, 2019 at 01:11:52PM +0100, Karol Herbst wrote: > > > On Wed, Nov 20, 2019 at 1:09 PM Mika Westerberg > > > wrote: > > > > > > > > On Wed, Nov 20, 2019 at 12:58:00PM +0100, Karol Herbst wrote: > > > > > overall, what I really want to know is, _why_ does it work on windows? > > > > > > > > So do I ;-) > > > > > > > > > Or what are we doing differently on Linux so that it doesn't work? If > > > > > anybody has any idea on how we could dig into this and figure it out > > > > > on this level, this would probably allow us to get closer to the root > > > > > cause? no? > > > > > > > > Have you tried to use the acpi_rev_override parameter in your system and > > > > does it have any effect? > > > > > > > > Also did you try to trace the ACPI _ON/_OFF() methods? I think that > > > > should hopefully reveal something. > > > > > > > > > > I think I did in the past and it seemed to have worked, there is just > > > one big issue with this: it's a Dell specific workaround afaik, and > > > this issue plagues not just Dell, but we've seen it on HP and Lenovo > > > laptops as well, and I've heard about users having the same issues on > > > Asus and MSI laptops as well. > > > > Maybe it is not a workaround at all but instead it simply determines > > whether the system supports RTD3 or something like that (IIRC Windows 8 > > started supporting it). Maybe Dell added check for Linux because at that > > time Linux did not support it. > > > > the point is, it's not checking it by default, so by default you still > run into the windows 8 codepath. Well you can add the quirk to acpi_rev_dmi_table[] so it goes to that path by default. There are a bunch of similar entries for Dell machines. Of course this does not help the non-Dell users so we would still need to figure out the root cause. > > In case RTD3 is supported it invokes LKDS() which probably does the L2 > > or L3 entry and this is for some reason does not work the same way in > > Linux than it does with Windows 8+. > > > > I don't remember if this happens only with nouveau or with the > > proprietary driver as well but looking at the nouveau runtime PM suspend > > hook (assuming I'm looking at the correct code): > > > > static int > > nouveau_pmops_runtime_suspend(struct device *dev) > > { > > struct pci_dev *pdev = to_pci_dev(dev); > > struct drm_device *drm_dev = pci_get_drvdata(pdev); > > int ret; > > > > if (!nouveau_pmops_runtime()) { > > pm_runtime_forbid(dev); > > return -EBUSY; > > } > > > > nouveau_switcheroo_optimus_dsm(); > > ret = nouveau_do_suspend(drm_dev, true); > > pci_save_state(pdev); > > pci_disable_device(pdev); > > pci_ignore_hotplug(pdev); > > pci_set_power_state(pdev, PCI_D3cold); > > drm_dev->switch_power_state = DRM_SWITCH_POWER_DYNAMIC_OFF; > > return ret; > > } > > > > Normally PCI drivers leave the PCI bus PM things to PCI core but here > > the driver does these. 
So I wonder if it makes any difference if we let > > the core handle all that: > > > > static int > > nouveau_pmops_runtime_suspend(struct device *dev) > > { > > struct pci_dev *pdev = to_pci_dev(dev); > > struct drm_device *drm_dev = pci_get_drvdata(pdev); > > int ret; > > > > if (!nouveau_pmops_runtime()) { > > pm_runtime_forbid(dev); > > return -EBUSY; > > } > > > > nouveau_switcheroo_optimus_dsm(); > > ret = nouveau_do_suspend(drm_dev, true); > > pci_ignore_hotplug(pdev); > > drm_dev->switch_power_state = DRM_SWITCH_POWER_DYNAMIC_OFF; > > return ret; > > } > > > > and similar for the nouveau_pmops_runtime_resume(). > > > > yeah, I tried that at some point and it didn't help either. The reason > we call those from inside Nouveau is to support systems pre _PR where > nouveau invokes custom _DSM calls on its own. We could potentially > check for that though. OK. ___ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
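For reference, the kind of entry Mika mentions is a few lines added to acpi_rev_dmi_table[] in drivers/acpi/blacklist.c. The sketch below is illustrative only: dmi_enable_rev_override() already exists in that file, but the DMI strings shown are an assumption and would have to be taken from dmidecode on an affected machine (the XPS 9560 is used only because it is the system discussed in this thread).

	/*
	 * Sketch of an acpi_rev_dmi_table[] entry (illustrative only).
	 * dmi_enable_rev_override() makes _REV report 5 so the firmware
	 * takes its "legacy" (pre Windows 8) path by default.
	 */
	{
	 .callback = dmi_enable_rev_override,
	 .ident = "Dell XPS 15 9560",
	 .matches = {
		DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
		DMI_MATCH(DMI_PRODUCT_NAME, "XPS 15 9560"),
		},
	},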
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Wed, Nov 20, 2019 at 4:15 PM Mika Westerberg wrote: > > On Wed, Nov 20, 2019 at 01:11:52PM +0100, Karol Herbst wrote: > > On Wed, Nov 20, 2019 at 1:09 PM Mika Westerberg > > wrote: > > > > > > On Wed, Nov 20, 2019 at 12:58:00PM +0100, Karol Herbst wrote: > > > > overall, what I really want to know is, _why_ does it work on windows? > > > > > > So do I ;-) > > > > > > > Or what are we doing differently on Linux so that it doesn't work? If > > > > anybody has any idea on how we could dig into this and figure it out > > > > on this level, this would probably allow us to get closer to the root > > > > cause? no? > > > > > > Have you tried to use the acpi_rev_override parameter in your system and > > > does it have any effect? > > > > > > Also did you try to trace the ACPI _ON/_OFF() methods? I think that > > > should hopefully reveal something. > > > > > > > I think I did in the past and it seemed to have worked, there is just > > one big issue with this: it's a Dell specific workaround afaik, and > > this issue plagues not just Dell, but we've seen it on HP and Lenovo > > laptops as well, and I've heard about users having the same issues on > > Asus and MSI laptops as well. > > Maybe it is not a workaround at all but instead it simply determines > whether the system supports RTD3 or something like that (IIRC Windows 8 > started supporting it). Maybe Dell added check for Linux because at that > time Linux did not support it. > the point is, it's not checking it by default, so by default you still run into the windows 8 codepath. > In case RTD3 is supported it invokes LKDS() which probably does the L2 > or L3 entry and this is for some reason does not work the same way in > Linux than it does with Windows 8+. > > I don't remember if this happens only with nouveau or with the > proprietary driver as well but looking at the nouveau runtime PM suspend > hook (assuming I'm looking at the correct code): > > static int > nouveau_pmops_runtime_suspend(struct device *dev) > { > struct pci_dev *pdev = to_pci_dev(dev); > struct drm_device *drm_dev = pci_get_drvdata(pdev); > int ret; > > if (!nouveau_pmops_runtime()) { > pm_runtime_forbid(dev); > return -EBUSY; > } > > nouveau_switcheroo_optimus_dsm(); > ret = nouveau_do_suspend(drm_dev, true); > pci_save_state(pdev); > pci_disable_device(pdev); > pci_ignore_hotplug(pdev); > pci_set_power_state(pdev, PCI_D3cold); > drm_dev->switch_power_state = DRM_SWITCH_POWER_DYNAMIC_OFF; > return ret; > } > > Normally PCI drivers leave the PCI bus PM things to PCI core but here > the driver does these. So I wonder if it makes any difference if we let > the core handle all that: > > static int > nouveau_pmops_runtime_suspend(struct device *dev) > { > struct pci_dev *pdev = to_pci_dev(dev); > struct drm_device *drm_dev = pci_get_drvdata(pdev); > int ret; > > if (!nouveau_pmops_runtime()) { > pm_runtime_forbid(dev); > return -EBUSY; > } > > nouveau_switcheroo_optimus_dsm(); > ret = nouveau_do_suspend(drm_dev, true); > pci_ignore_hotplug(pdev); > drm_dev->switch_power_state = DRM_SWITCH_POWER_DYNAMIC_OFF; > return ret; > } > > and similar for the nouveau_pmops_runtime_resume(). > yeah, I tried that at some point and it didn't help either. The reason we call those from inside Nouveau is to support systems pre _PR where nouveau invokes custom _DSM calls on its own. We could potentially check for that though. ___ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
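A sketch of what that check could look like, assuming the pci_pr3_present() helper recently added to the PCI core is available: keep nouveau's manual bus PM only for pre-_PR3 systems that rely on the legacy Optimus _DSM, and otherwise leave the D3cold transition entirely to the PCI/ACPI core. This is just the shape of the idea, not a tested nouveau patch.

static int
nouveau_pmops_runtime_suspend(struct device *dev)
{
	struct pci_dev *pdev = to_pci_dev(dev);
	struct drm_device *drm_dev = pci_get_drvdata(pdev);
	int ret;

	if (!nouveau_pmops_runtime()) {
		pm_runtime_forbid(dev);
		return -EBUSY;
	}

	nouveau_switcheroo_optimus_dsm();
	ret = nouveau_do_suspend(drm_dev, true);
	pci_ignore_hotplug(pdev);

	/*
	 * Only manage the bus PM by hand on pre-_PR3 systems where nouveau
	 * drives the legacy Optimus _DSM itself; on _PR3 platforms leave
	 * the D3cold transition to the PCI/ACPI core.
	 */
	if (!pci_pr3_present(pdev)) {
		pci_save_state(pdev);
		pci_disable_device(pdev);
		pci_set_power_state(pdev, PCI_D3cold);
	}

	drm_dev->switch_power_state = DRM_SWITCH_POWER_DYNAMIC_OFF;
	return ret;
}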
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Wed, Nov 20, 2019 at 01:11:52PM +0100, Karol Herbst wrote: > On Wed, Nov 20, 2019 at 1:09 PM Mika Westerberg > wrote: > > > > On Wed, Nov 20, 2019 at 12:58:00PM +0100, Karol Herbst wrote: > > > overall, what I really want to know is, _why_ does it work on windows? > > > > So do I ;-) > > > > > Or what are we doing differently on Linux so that it doesn't work? If > > > anybody has any idea on how we could dig into this and figure it out > > > on this level, this would probably allow us to get closer to the root > > > cause? no? > > > > Have you tried to use the acpi_rev_override parameter in your system and > > does it have any effect? > > > > Also did you try to trace the ACPI _ON/_OFF() methods? I think that > > should hopefully reveal something. > > > > I think I did in the past and it seemed to have worked, there is just > one big issue with this: it's a Dell specific workaround afaik, and > this issue plagues not just Dell, but we've seen it on HP and Lenovo > laptops as well, and I've heard about users having the same issues on > Asus and MSI laptops as well. Maybe it is not a workaround at all but instead it simply determines whether the system supports RTD3 or something like that (IIRC Windows 8 started supporting it). Maybe Dell added check for Linux because at that time Linux did not support it. In case RTD3 is supported it invokes LKDS() which probably does the L2 or L3 entry and this is for some reason does not work the same way in Linux than it does with Windows 8+. I don't remember if this happens only with nouveau or with the proprietary driver as well but looking at the nouveau runtime PM suspend hook (assuming I'm looking at the correct code): static int nouveau_pmops_runtime_suspend(struct device *dev) { struct pci_dev *pdev = to_pci_dev(dev); struct drm_device *drm_dev = pci_get_drvdata(pdev); int ret; if (!nouveau_pmops_runtime()) { pm_runtime_forbid(dev); return -EBUSY; } nouveau_switcheroo_optimus_dsm(); ret = nouveau_do_suspend(drm_dev, true); pci_save_state(pdev); pci_disable_device(pdev); pci_ignore_hotplug(pdev); pci_set_power_state(pdev, PCI_D3cold); drm_dev->switch_power_state = DRM_SWITCH_POWER_DYNAMIC_OFF; return ret; } Normally PCI drivers leave the PCI bus PM things to PCI core but here the driver does these. So I wonder if it makes any difference if we let the core handle all that: static int nouveau_pmops_runtime_suspend(struct device *dev) { struct pci_dev *pdev = to_pci_dev(dev); struct drm_device *drm_dev = pci_get_drvdata(pdev); int ret; if (!nouveau_pmops_runtime()) { pm_runtime_forbid(dev); return -EBUSY; } nouveau_switcheroo_optimus_dsm(); ret = nouveau_do_suspend(drm_dev, true); pci_ignore_hotplug(pdev); drm_dev->switch_power_state = DRM_SWITCH_POWER_DYNAMIC_OFF; return ret; } and similar for the nouveau_pmops_runtime_resume(). ___ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
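For completeness, a rough sketch of how the matching resume hook could look under the same approach. Again this is not the actual nouveau code: nouveau_do_resume() is assumed to keep its current signature, and the pre-_PR3 fallback simply mirrors the suspend side, since on _PR3 platforms the PCI/ACPI core brings the device back to D0 and restores its state before the driver callback runs.

static int
nouveau_pmops_runtime_resume(struct device *dev)
{
	struct pci_dev *pdev = to_pci_dev(dev);
	struct drm_device *drm_dev = pci_get_drvdata(pdev);
	int ret;

	if (!nouveau_pmops_runtime()) {
		pm_runtime_forbid(dev);
		return -EBUSY;
	}

	/*
	 * On pre-_PR3 systems nouveau still has to bring the card back to
	 * D0 itself; otherwise the PCI/ACPI core has already done so.
	 */
	if (!pci_pr3_present(pdev)) {
		pci_set_power_state(pdev, PCI_D0);
		pci_restore_state(pdev);
		ret = pci_enable_device(pdev);
		if (ret)
			return ret;
		pci_set_master(pdev);
	}

	ret = nouveau_do_resume(drm_dev, true);
	drm_dev->switch_power_state = DRM_SWITCH_POWER_DYNAMIC_ON;
	return ret;
}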
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
It depends on the kernel being built with ACPI_REV_OVERRIDE_POSSIBLE=y and acpi_rev_override=1 being set on the kernel command line On Wed, Nov 20, 2019 at 1:15 PM Rafael J. Wysocki wrote: > > On Wed, Nov 20, 2019 at 1:10 PM Karol Herbst wrote: > > > > On Wed, Nov 20, 2019 at 1:06 PM Rafael J. Wysocki wrote: > > > > > > On Wed, Nov 20, 2019 at 12:51 PM Karol Herbst wrote: > > > > > > > > On Wed, Nov 20, 2019 at 12:48 PM Rafael J. Wysocki > > > > wrote: > > > > > > > > > > On Wed, Nov 20, 2019 at 12:22 PM Mika Westerberg > > > > > wrote: > > > > > > > > > > > > On Wed, Nov 20, 2019 at 11:52:22AM +0100, Rafael J. Wysocki wrote: > > > > > > > On Wed, Nov 20, 2019 at 11:18 AM Mika Westerberg > > > > > > > wrote: > > > > > > > > > > > > > > [cut] > > > > > > > > > > > > > > > > > Oh, so does it look like we are trying to work around AML that > > > > > > > tried > > > > > > > to work around some problematic behavior in Linux at one point? > > > > > > > > > > > > Yes, it looks like so if I read the ASL right. > > > > > > > > > > OK, so that would call for a DMI-based quirk as the real cause for the > > > > > issue seems to be the AML in question, which means a firmware problem. > > > > > > > > > > > > > And I disagree as this is a linux specific workaround and windows goes > > > > that path and succeeds. This firmware based workaround was added, > > > > because it broke on Linux. > > > > > > Apparently so at the time it was added, but would it still break after > > > the kernel changes made since then? > > > > > > Moreover, has it not become harmful now? IOW, wouldn't it work after > > > removing the "Linux workaround" from the AML? > > > > > > The only way to verify that I can see would be to run the system with > > > custom ACPI tables without the "Linux workaround" in the AML in > > > question. > > > > > > > the workaround is not enabled by default, because it has to be > > explicitly enabled by the user. > > I'm not sure what you are talking about. > > I'm taking specifically about the ((OSYS == 0x07DF) && (_REV == 0x05)) > check mentioned by Mika which doesn't seem to depend on user input in > any way. > ___ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Wed, Nov 20, 2019 at 1:10 PM Karol Herbst wrote: > > On Wed, Nov 20, 2019 at 1:06 PM Rafael J. Wysocki wrote: > > > > On Wed, Nov 20, 2019 at 12:51 PM Karol Herbst wrote: > > > > > > On Wed, Nov 20, 2019 at 12:48 PM Rafael J. Wysocki > > > wrote: > > > > > > > > On Wed, Nov 20, 2019 at 12:22 PM Mika Westerberg > > > > wrote: > > > > > > > > > > On Wed, Nov 20, 2019 at 11:52:22AM +0100, Rafael J. Wysocki wrote: > > > > > > On Wed, Nov 20, 2019 at 11:18 AM Mika Westerberg > > > > > > wrote: > > > > > > > > > > > [cut] > > > > > > > > > > > > > > Oh, so does it look like we are trying to work around AML that tried > > > > > > to work around some problematic behavior in Linux at one point? > > > > > > > > > > Yes, it looks like so if I read the ASL right. > > > > > > > > OK, so that would call for a DMI-based quirk as the real cause for the > > > > issue seems to be the AML in question, which means a firmware problem. > > > > > > > > > > And I disagree as this is a linux specific workaround and windows goes > > > that path and succeeds. This firmware based workaround was added, > > > because it broke on Linux. > > > > Apparently so at the time it was added, but would it still break after > > the kernel changes made since then? > > > > Moreover, has it not become harmful now? IOW, wouldn't it work after > > removing the "Linux workaround" from the AML? > > > > The only way to verify that I can see would be to run the system with > > custom ACPI tables without the "Linux workaround" in the AML in > > question. > > > > the workaround is not enabled by default, because it has to be > explicitly enabled by the user. I'm not sure what you are talking about. I'm taking specifically about the ((OSYS == 0x07DF) && (_REV == 0x05)) check mentioned by Mika which doesn't seem to depend on user input in any way. ___ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Wed, Nov 20, 2019 at 1:09 PM Mika Westerberg wrote: > > On Wed, Nov 20, 2019 at 12:58:00PM +0100, Karol Herbst wrote: > > overall, what I really want to know is, _why_ does it work on windows? > > So do I ;-) > > > Or what are we doing differently on Linux so that it doesn't work? If > > anybody has any idea on how we could dig into this and figure it out > > on this level, this would probably allow us to get closer to the root > > cause? no? > > Have you tried to use the acpi_rev_override parameter in your system and > does it have any effect? > > Also did you try to trace the ACPI _ON/_OFF() methods? I think that > should hopefully reveal something. > I think I did in the past and it seemed to have worked, there is just one big issue with this: it's a Dell specific workaround afaik, and this issue plagues not just Dell, but we've seen it on HP and Lenovo laptops as well, and I've heard about users having the same issues on Asus and MSI laptops as well. I will spend some time to collect all the necessary information, create a bug to put it all in there and send out a v5 with the updated information and references to this bug. ___ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Wed, Nov 20, 2019 at 1:06 PM Rafael J. Wysocki wrote: > > On Wed, Nov 20, 2019 at 12:51 PM Karol Herbst wrote: > > > > On Wed, Nov 20, 2019 at 12:48 PM Rafael J. Wysocki > > wrote: > > > > > > On Wed, Nov 20, 2019 at 12:22 PM Mika Westerberg > > > wrote: > > > > > > > > On Wed, Nov 20, 2019 at 11:52:22AM +0100, Rafael J. Wysocki wrote: > > > > > On Wed, Nov 20, 2019 at 11:18 AM Mika Westerberg > > > > > wrote: > > > > > > > > [cut] > > > > > > > > > > > Oh, so does it look like we are trying to work around AML that tried > > > > > to work around some problematic behavior in Linux at one point? > > > > > > > > Yes, it looks like so if I read the ASL right. > > > > > > OK, so that would call for a DMI-based quirk as the real cause for the > > > issue seems to be the AML in question, which means a firmware problem. > > > > > > > And I disagree as this is a linux specific workaround and windows goes > > that path and succeeds. This firmware based workaround was added, > > because it broke on Linux. > > Apparently so at the time it was added, but would it still break after > the kernel changes made since then? > > Moreover, has it not become harmful now? IOW, wouldn't it work after > removing the "Linux workaround" from the AML? > > The only way to verify that I can see would be to run the system with > custom ACPI tables without the "Linux workaround" in the AML in > question. Or running it with acpi_rev_override as suggested by Mika, which effectively would be the same thing. ___ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Wed, Nov 20, 2019 at 1:06 PM Rafael J. Wysocki wrote: > > On Wed, Nov 20, 2019 at 12:51 PM Karol Herbst wrote: > > > > On Wed, Nov 20, 2019 at 12:48 PM Rafael J. Wysocki > > wrote: > > > > > > On Wed, Nov 20, 2019 at 12:22 PM Mika Westerberg > > > wrote: > > > > > > > > On Wed, Nov 20, 2019 at 11:52:22AM +0100, Rafael J. Wysocki wrote: > > > > > On Wed, Nov 20, 2019 at 11:18 AM Mika Westerberg > > > > > wrote: > > > > > > > > [cut] > > > > > > > > > > > Oh, so does it look like we are trying to work around AML that tried > > > > > to work around some problematic behavior in Linux at one point? > > > > > > > > Yes, it looks like so if I read the ASL right. > > > > > > OK, so that would call for a DMI-based quirk as the real cause for the > > > issue seems to be the AML in question, which means a firmware problem. > > > > > > > And I disagree as this is a linux specific workaround and windows goes > > that path and succeeds. This firmware based workaround was added, > > because it broke on Linux. > > Apparently so at the time it was added, but would it still break after > the kernel changes made since then? > > Moreover, has it not become harmful now? IOW, wouldn't it work after > removing the "Linux workaround" from the AML? > > The only way to verify that I can see would be to run the system with > custom ACPI tables without the "Linux workaround" in the AML in > question. > the workaround is not enabled by default, because it has to be explicitly enabled by the user. ___ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Wed, Nov 20, 2019 at 12:58:00PM +0100, Karol Herbst wrote: > overall, what I really want to know is, _why_ does it work on windows? So do I ;-) > Or what are we doing differently on Linux so that it doesn't work? If > anybody has any idea on how we could dig into this and figure it out > on this level, this would probably allow us to get closer to the root > cause? no? Have you tried to use the acpi_rev_override parameter in your system and does it have any effect? Also did you try to trace the ACPI _ON/_OFF() methods? I think that should hopefully reveal something. ___ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Wed, Nov 20, 2019 at 12:51 PM Karol Herbst wrote: > > On Wed, Nov 20, 2019 at 12:48 PM Rafael J. Wysocki wrote: > > > > On Wed, Nov 20, 2019 at 12:22 PM Mika Westerberg > > wrote: > > > > > > On Wed, Nov 20, 2019 at 11:52:22AM +0100, Rafael J. Wysocki wrote: > > > > On Wed, Nov 20, 2019 at 11:18 AM Mika Westerberg > > > > wrote: > > > > > [cut] > > > > > > > > Oh, so does it look like we are trying to work around AML that tried > > > > to work around some problematic behavior in Linux at one point? > > > > > > Yes, it looks like so if I read the ASL right. > > > > OK, so that would call for a DMI-based quirk as the real cause for the > > issue seems to be the AML in question, which means a firmware problem. > > > > And I disagree as this is a linux specific workaround and windows goes > that path and succeeds. This firmware based workaround was added, > because it broke on Linux. Apparently so at the time it was added, but would it still break after the kernel changes made since then? Moreover, has it not become harmful now? IOW, wouldn't it work after removing the "Linux workaround" from the AML? The only way to verify that I can see would be to run the system with custom ACPI tables without the "Linux workaround" in the AML in question. ___ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
overall, what I really want to know is, _why_ does it work on windows? Or what are we doing differently on Linux so that it doesn't work? If anybody has any idea on how we could dig into this and figure it out on this level, this would probably allow us to get closer to the root cause? no? On Wed, Nov 20, 2019 at 12:54 PM Karol Herbst wrote: > > for newer Windows the firmware uses bit 0x80 on 0x248 (Q0L2 being the > field name) on the bridge controller to turn of the device, on other > versions it uses the "older"? 0xb0 register and the P0LD field, which > is documented, where the former is not. > > On Wed, Nov 20, 2019 at 12:51 PM Mika Westerberg > wrote: > > > > On Wed, Nov 20, 2019 at 01:22:16PM +0200, Mika Westerberg wrote: > > > If (((OSYS <= 0x07D9) || ((OSYS == 0x07DF) && (_REV == > > > 0x05 > > > { > > > > The OSYS comes from this (in DSDT): > > > > If (_OSI ("Windows 2009")) > > { > > OSYS = 0x07D9 > > } > > > > If (_OSI ("Windows 2012")) > > { > > OSYS = 0x07DC > > } > > > > If (_OSI ("Windows 2013")) > > { > > OSYS = 0x07DD > > } > > > > If (_OSI ("Windows 2015")) > > { > > OSYS = 0x07DF > > } > > > > So I guess this particular check tries to identify Windows 7 and older, > > and Linux. > > > > > If ((PIOF == Zero)) > > > { > > > P0LD = One > > > TCNT = Zero > > > While ((TCNT < LDLY)) > > > { > > > If ((P0LT == 0x08)) > > > { > > > Break > > > } > > > > > > Sleep (0x10) > > > TCNT += 0x10 > > > } > > > > > > P0RM = One > > > P0AP = 0x03 > > > } > > > ElseIf ((PIOF == One)) > > > { > > > P1LD = One > > > TCNT = Zero > > > While ((TCNT < LDLY)) > > > { > > > If ((P1LT == 0x08)) > > > { > > > Break > > > } > > > > > > Sleep (0x10) > > > TCNT += 0x10 > > > } > > > > > > P1RM = One > > > P1AP = 0x03 > > > } > > > ElseIf ((PIOF == 0x02)) > > > { > > > P2LD = One > > > TCNT = Zero > > > While ((TCNT < LDLY)) > > > { > > > If ((P2LT == 0x08)) > > > { > > > Break > > > } > > > > > > Sleep (0x10) > > > TCNT += 0x10 > > > } > > > > > > P2RM = One > > > P2AP = 0x03 > > > } > > > > > > If ((PBGE != Zero)) > > > { > > > If (SBDL (PIOF)) > > > { > > > MBDL = GMXB (PIOF) > > > PDUB (PIOF, MBDL) > > > } > > > } > > > } > > > Else > > > { > > > LKDS (PIOF) > > > } > > > > > > If ((DerefOf (SCLK [Zero]) != Zero)) > > > { > > > PCRO (0xDC, 0x100C, DerefOf (SCLK [One])) > > > Sleep (0x10) > > > } > > > > > > GPPR (PIOF, Zero) > > > If ((OSYS != 0x07D9)) > > > { > > > DIWK (PIOF) > > > } > > > > > > \_SB.SGOV (0x01010004, Zero) > > > Sleep (0x14) > > > Return (Zero) > > > } > > ___ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
for newer Windows the firmware uses bit 0x80 on 0x248 (Q0L2 being the field name) on the bridge controller to turn of the device, on other versions it uses the "older"? 0xb0 register and the P0LD field, which is documented, where the former is not. On Wed, Nov 20, 2019 at 12:51 PM Mika Westerberg wrote: > > On Wed, Nov 20, 2019 at 01:22:16PM +0200, Mika Westerberg wrote: > > If (((OSYS <= 0x07D9) || ((OSYS == 0x07DF) && (_REV == > > 0x05 > > { > > The OSYS comes from this (in DSDT): > > If (_OSI ("Windows 2009")) > { > OSYS = 0x07D9 > } > > If (_OSI ("Windows 2012")) > { > OSYS = 0x07DC > } > > If (_OSI ("Windows 2013")) > { > OSYS = 0x07DD > } > > If (_OSI ("Windows 2015")) > { > OSYS = 0x07DF > } > > So I guess this particular check tries to identify Windows 7 and older, > and Linux. > > > If ((PIOF == Zero)) > > { > > P0LD = One > > TCNT = Zero > > While ((TCNT < LDLY)) > > { > > If ((P0LT == 0x08)) > > { > > Break > > } > > > > Sleep (0x10) > > TCNT += 0x10 > > } > > > > P0RM = One > > P0AP = 0x03 > > } > > ElseIf ((PIOF == One)) > > { > > P1LD = One > > TCNT = Zero > > While ((TCNT < LDLY)) > > { > > If ((P1LT == 0x08)) > > { > > Break > > } > > > > Sleep (0x10) > > TCNT += 0x10 > > } > > > > P1RM = One > > P1AP = 0x03 > > } > > ElseIf ((PIOF == 0x02)) > > { > > P2LD = One > > TCNT = Zero > > While ((TCNT < LDLY)) > > { > > If ((P2LT == 0x08)) > > { > > Break > > } > > > > Sleep (0x10) > > TCNT += 0x10 > > } > > > > P2RM = One > > P2AP = 0x03 > > } > > > > If ((PBGE != Zero)) > > { > > If (SBDL (PIOF)) > > { > > MBDL = GMXB (PIOF) > > PDUB (PIOF, MBDL) > > } > > } > > } > > Else > > { > > LKDS (PIOF) > > } > > > > If ((DerefOf (SCLK [Zero]) != Zero)) > > { > > PCRO (0xDC, 0x100C, DerefOf (SCLK [One])) > > Sleep (0x10) > > } > > > > GPPR (PIOF, Zero) > > If ((OSYS != 0x07D9)) > > { > > DIWK (PIOF) > > } > > > > \_SB.SGOV (0x01010004, Zero) > > Sleep (0x14) > > Return (Zero) > > } > ___ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
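If it helps to watch those fields from Linux while the _ON()/_OFF() methods run, a small debugging sketch like the one below could dump the two dwords of the root port's config space mentioned above. The 0x248 and 0xb0 offsets come straight from this description; that the AML reaches them through the port's (extended) config space is an assumption based on how such OperationRegions are usually mapped, and the bit layout of Q0L2/P0LD inside them is firmware specific.

#include <linux/pci.h>

/* Debugging sketch: dump the root-port config dwords discussed above. */
static void dump_peg_link_regs(struct pci_dev *bridge)
{
	u32 val;

	pci_read_config_dword(bridge, 0xb0, &val);
	pci_info(bridge, "cfg 0x0b0 = 0x%08x\n", val);

	pci_read_config_dword(bridge, 0x248, &val);
	pci_info(bridge, "cfg 0x248 = 0x%08x\n", val);
}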
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Wed, Nov 20, 2019 at 12:48 PM Rafael J. Wysocki wrote: > > On Wed, Nov 20, 2019 at 12:22 PM Mika Westerberg > wrote: > > > > On Wed, Nov 20, 2019 at 11:52:22AM +0100, Rafael J. Wysocki wrote: > > > On Wed, Nov 20, 2019 at 11:18 AM Mika Westerberg > > > wrote: > > > > > > > > Hi Karol, > > > > > > > > On Tue, Nov 19, 2019 at 11:26:45PM +0100, Karol Herbst wrote: > > > > > On Tue, Nov 19, 2019 at 10:50 PM Bjorn Helgaas > > > > > wrote: > > > > > > > > > > > > [+cc Dave] > > > > > > > > > > > > On Thu, Oct 17, 2019 at 02:19:01PM +0200, Karol Herbst wrote: > > > > > > > Fixes state transitions of Nvidia Pascal GPUs from D3cold into > > > > > > > higher device > > > > > > > states. > > > > > > > > > > > > > > v2: convert to pci_dev quirk > > > > > > > put a proper technical explanation of the issue as a in-code > > > > > > > comment > > > > > > > v3: disable it only for certain combinations of intel and nvidia > > > > > > > hardware > > > > > > > v4: simplify quirk by setting flag on the GPU itself > > > > > > > > > > > > I have zero confidence that we understand the real problem, but we > > > > > > do > > > > > > need to do something with this. I'll merge it for v5.5 if we get > > > > > > the > > > > > > minor procedural stuff below straightened out. > > > > > > > > > > > > > > > > Thanks, and I agree with your statement, but at this point I think > > > > > only Intel can help out digging deeper as I see no way to debug this > > > > > further. > > > > > > > > I don't have anything against this patch, as long as the quirk stays > > > > limited to the particular root port leading to the NVIDIA GPU. The > > > > reason why I think it should to be limited is that I'm pretty certain > > > > the problem is not in the root port itself. I have here a KBL based > > > > Thinkpad X1 Carbon 6th gen that can put the TBT controller into D3cold > > > > (it is connected to PCH root port) and it wakes up there just fine, so > > > > don't want to break that. > > > > > > > > Now, PCIe devices cannot go into D3cold all by themselves. They always > > > > need help from the platform side which is ACPI in this case. This is > > > > done by having the device to have _PR3 method that returns one or more > > > > power resources that the OS is supposed to turn off when the device is > > > > put into D3cold. All of that is implemented as form of ACPI methods that > > > > pretty much do the hardware specific things that are outside of PCIe > > > > spec to get the device into D3cold. At high level the _OFF() method > > > > causes the root port to broadcast PME_Turn_Off message that results the > > > > link to enter L2/3 ready, it then asserts PERST, configures WAKE (both > > > > can be GPIOs) and finally removes power (if the link goes into L3, > > > > otherwise it goes into L2). > > > > > > > > I think this is where the problem actually lies - the ASL methods that > > > > are used to put the device into D3cold and back. We know that in Windows > > > > this all works fine so unless Windows quirks the root port the same way > > > > there is another reason behind this. > > > > > > > > In case of Dell XPS 9560 (IIRC that's the machine you have) the > > > > corresponding power resource is called \_SB.PCI0.PEG0.PG00 and its > > > > _ON/_OFF methods end up calling PGON()/PGOF() accordingly. The methods > > > > itself do lots of things and it is hard to follow the dissassembled > > > > ASL which does not have any comments but there are couple of things that > > > > stand out where we may go into a different path. 
One of them is this in > > > > the PGOF() method: > > > > > > > >If (((OSYS <= 0x07D9) || ((OSYS == 0x07DF) && (_REV == 0x05 > > > > > > > > The ((OSYS == 0x07DF) && (_REV == 0x05)) checks specifically for Linux > > > > (see [1] and 18d78b64fddc ("ACPI / init: Make it possible to override > > > > _REV")) so it might be that Dell people tested this at some point in > > > > Linux as well. Added Mario in case he has any ideas. > > > > > > > > Previously I suggested you to try the ACPI method tracing to see what > > > > happens inside PGOF(). Did you have time to try it? It may provide more > > > > information about that is happening inside those methods and hopefully > > > > point us to the root cause. > > > > > > > > Also if you haven't tried already passing acpi_rev_override in the > > > > command line makes the _REV to return 5 so it should go into the "Linux" > > > > path in PGOF(). > > > > > > Oh, so does it look like we are trying to work around AML that tried > > > to work around some problematic behavior in Linux at one point? > > > > Yes, it looks like so if I read the ASL right. > > OK, so that would call for a DMI-based quirk as the real cause for the > issue seems to be the AML in question, which means a firmware problem. > And I disagree as this is a linux specific workaround and windows goes that path and succeeds. This firmware based workaround was added, because it broke on Linux. _
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Wed, Nov 20, 2019 at 01:22:16PM +0200, Mika Westerberg wrote: > If (((OSYS <= 0x07D9) || ((OSYS == 0x07DF) && (_REV == > 0x05 > { The OSYS comes from this (in DSDT): If (_OSI ("Windows 2009")) { OSYS = 0x07D9 } If (_OSI ("Windows 2012")) { OSYS = 0x07DC } If (_OSI ("Windows 2013")) { OSYS = 0x07DD } If (_OSI ("Windows 2015")) { OSYS = 0x07DF } So I guess this particular check tries to identify Windows 7 and older, and Linux. > If ((PIOF == Zero)) > { > P0LD = One > TCNT = Zero > While ((TCNT < LDLY)) > { > If ((P0LT == 0x08)) > { > Break > } > > Sleep (0x10) > TCNT += 0x10 > } > > P0RM = One > P0AP = 0x03 > } > ElseIf ((PIOF == One)) > { > P1LD = One > TCNT = Zero > While ((TCNT < LDLY)) > { > If ((P1LT == 0x08)) > { > Break > } > > Sleep (0x10) > TCNT += 0x10 > } > > P1RM = One > P1AP = 0x03 > } > ElseIf ((PIOF == 0x02)) > { > P2LD = One > TCNT = Zero > While ((TCNT < LDLY)) > { > If ((P2LT == 0x08)) > { > Break > } > > Sleep (0x10) > TCNT += 0x10 > } > > P2RM = One > P2AP = 0x03 > } > > If ((PBGE != Zero)) > { > If (SBDL (PIOF)) > { > MBDL = GMXB (PIOF) > PDUB (PIOF, MBDL) > } > } > } > Else > { > LKDS (PIOF) > } > > If ((DerefOf (SCLK [Zero]) != Zero)) > { > PCRO (0xDC, 0x100C, DerefOf (SCLK [One])) > Sleep (0x10) > } > > GPPR (PIOF, Zero) > If ((OSYS != 0x07D9)) > { > DIWK (PIOF) > } > > \_SB.SGOV (0x01010004, Zero) > Sleep (0x14) > Return (Zero) > } ___ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Wed, Nov 20, 2019 at 12:22 PM Mika Westerberg wrote: > > On Wed, Nov 20, 2019 at 11:52:22AM +0100, Rafael J. Wysocki wrote: > > On Wed, Nov 20, 2019 at 11:18 AM Mika Westerberg > > wrote: > > > > > > Hi Karol, > > > > > > On Tue, Nov 19, 2019 at 11:26:45PM +0100, Karol Herbst wrote: > > > > On Tue, Nov 19, 2019 at 10:50 PM Bjorn Helgaas > > > > wrote: > > > > > > > > > > [+cc Dave] > > > > > > > > > > On Thu, Oct 17, 2019 at 02:19:01PM +0200, Karol Herbst wrote: > > > > > > Fixes state transitions of Nvidia Pascal GPUs from D3cold into > > > > > > higher device > > > > > > states. > > > > > > > > > > > > v2: convert to pci_dev quirk > > > > > > put a proper technical explanation of the issue as a in-code > > > > > > comment > > > > > > v3: disable it only for certain combinations of intel and nvidia > > > > > > hardware > > > > > > v4: simplify quirk by setting flag on the GPU itself > > > > > > > > > > I have zero confidence that we understand the real problem, but we do > > > > > need to do something with this. I'll merge it for v5.5 if we get the > > > > > minor procedural stuff below straightened out. > > > > > > > > > > > > > Thanks, and I agree with your statement, but at this point I think > > > > only Intel can help out digging deeper as I see no way to debug this > > > > further. > > > > > > I don't have anything against this patch, as long as the quirk stays > > > limited to the particular root port leading to the NVIDIA GPU. The > > > reason why I think it should to be limited is that I'm pretty certain > > > the problem is not in the root port itself. I have here a KBL based > > > Thinkpad X1 Carbon 6th gen that can put the TBT controller into D3cold > > > (it is connected to PCH root port) and it wakes up there just fine, so > > > don't want to break that. > > > > > > Now, PCIe devices cannot go into D3cold all by themselves. They always > > > need help from the platform side which is ACPI in this case. This is > > > done by having the device to have _PR3 method that returns one or more > > > power resources that the OS is supposed to turn off when the device is > > > put into D3cold. All of that is implemented as form of ACPI methods that > > > pretty much do the hardware specific things that are outside of PCIe > > > spec to get the device into D3cold. At high level the _OFF() method > > > causes the root port to broadcast PME_Turn_Off message that results the > > > link to enter L2/3 ready, it then asserts PERST, configures WAKE (both > > > can be GPIOs) and finally removes power (if the link goes into L3, > > > otherwise it goes into L2). > > > > > > I think this is where the problem actually lies - the ASL methods that > > > are used to put the device into D3cold and back. We know that in Windows > > > this all works fine so unless Windows quirks the root port the same way > > > there is another reason behind this. > > > > > > In case of Dell XPS 9560 (IIRC that's the machine you have) the > > > corresponding power resource is called \_SB.PCI0.PEG0.PG00 and its > > > _ON/_OFF methods end up calling PGON()/PGOF() accordingly. The methods > > > itself do lots of things and it is hard to follow the dissassembled > > > ASL which does not have any comments but there are couple of things that > > > stand out where we may go into a different path. 
One of them is this in > > > the PGOF() method: > > > > > >If (((OSYS <= 0x07D9) || ((OSYS == 0x07DF) && (_REV == 0x05 > > > > > > The ((OSYS == 0x07DF) && (_REV == 0x05)) checks specifically for Linux > > > (see [1] and 18d78b64fddc ("ACPI / init: Make it possible to override > > > _REV")) so it might be that Dell people tested this at some point in > > > Linux as well. Added Mario in case he has any ideas. > > > > > > Previously I suggested you to try the ACPI method tracing to see what > > > happens inside PGOF(). Did you have time to try it? It may provide more > > > information about that is happening inside those methods and hopefully > > > point us to the root cause. > > > > > > Also if you haven't tried already passing acpi_rev_override in the > > > command line makes the _REV to return 5 so it should go into the "Linux" > > > path in PGOF(). > > > > Oh, so does it look like we are trying to work around AML that tried > > to work around some problematic behavior in Linux at one point? > > Yes, it looks like so if I read the ASL right. OK, so that would call for a DMI-based quirk as the real cause for the issue seems to be the AML in question, which means a firmware problem. ___ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Wed, Nov 20, 2019 at 11:52:22AM +0100, Rafael J. Wysocki wrote: > On Wed, Nov 20, 2019 at 11:18 AM Mika Westerberg > wrote: > > > > Hi Karol, > > > > On Tue, Nov 19, 2019 at 11:26:45PM +0100, Karol Herbst wrote: > > > On Tue, Nov 19, 2019 at 10:50 PM Bjorn Helgaas wrote: > > > > > > > > [+cc Dave] > > > > > > > > On Thu, Oct 17, 2019 at 02:19:01PM +0200, Karol Herbst wrote: > > > > > Fixes state transitions of Nvidia Pascal GPUs from D3cold into higher > > > > > device > > > > > states. > > > > > > > > > > v2: convert to pci_dev quirk > > > > > put a proper technical explanation of the issue as a in-code > > > > > comment > > > > > v3: disable it only for certain combinations of intel and nvidia > > > > > hardware > > > > > v4: simplify quirk by setting flag on the GPU itself > > > > > > > > I have zero confidence that we understand the real problem, but we do > > > > need to do something with this. I'll merge it for v5.5 if we get the > > > > minor procedural stuff below straightened out. > > > > > > > > > > Thanks, and I agree with your statement, but at this point I think > > > only Intel can help out digging deeper as I see no way to debug this > > > further. > > > > I don't have anything against this patch, as long as the quirk stays > > limited to the particular root port leading to the NVIDIA GPU. The > > reason why I think it should to be limited is that I'm pretty certain > > the problem is not in the root port itself. I have here a KBL based > > Thinkpad X1 Carbon 6th gen that can put the TBT controller into D3cold > > (it is connected to PCH root port) and it wakes up there just fine, so > > don't want to break that. > > > > Now, PCIe devices cannot go into D3cold all by themselves. They always > > need help from the platform side which is ACPI in this case. This is > > done by having the device to have _PR3 method that returns one or more > > power resources that the OS is supposed to turn off when the device is > > put into D3cold. All of that is implemented as form of ACPI methods that > > pretty much do the hardware specific things that are outside of PCIe > > spec to get the device into D3cold. At high level the _OFF() method > > causes the root port to broadcast PME_Turn_Off message that results the > > link to enter L2/3 ready, it then asserts PERST, configures WAKE (both > > can be GPIOs) and finally removes power (if the link goes into L3, > > otherwise it goes into L2). > > > > I think this is where the problem actually lies - the ASL methods that > > are used to put the device into D3cold and back. We know that in Windows > > this all works fine so unless Windows quirks the root port the same way > > there is another reason behind this. > > > > In case of Dell XPS 9560 (IIRC that's the machine you have) the > > corresponding power resource is called \_SB.PCI0.PEG0.PG00 and its > > _ON/_OFF methods end up calling PGON()/PGOF() accordingly. The methods > > itself do lots of things and it is hard to follow the dissassembled > > ASL which does not have any comments but there are couple of things that > > stand out where we may go into a different path. One of them is this in > > the PGOF() method: > > > >If (((OSYS <= 0x07D9) || ((OSYS == 0x07DF) && (_REV == 0x05 > > > > The ((OSYS == 0x07DF) && (_REV == 0x05)) checks specifically for Linux > > (see [1] and 18d78b64fddc ("ACPI / init: Make it possible to override > > _REV")) so it might be that Dell people tested this at some point in > > Linux as well. Added Mario in case he has any ideas. 
> > Previously I suggested you to try the ACPI method tracing to see what
> > happens inside PGOF(). Did you have time to try it? It may provide more
> > information about that is happening inside those methods and hopefully
> > point us to the root cause.
> >
> > Also if you haven't tried already passing acpi_rev_override in the
> > command line makes the _REV to return 5 so it should go into the "Linux"
> > path in PGOF().
>
> Oh, so does it look like we are trying to work around AML that tried
> to work around some problematic behavior in Linux at one point?

Yes, it looks like that, if I read the ASL right. The whole method is shown below (the full acpidump was shared by Karol in the v3 thread) and there is a similar check in the _ON (PGON) method:

Method (PGOF, 1, Serialized)
{
    PIOF = Arg0
    If ((PIOF == Zero))
    {
        If ((SGGP == Zero))
        {
            Return (Zero)
        }
    }
    ElseIf ((PIOF == One))
    {
        If ((P1GP == Zero))
        {
            Return (Zero)
        }
    }
    ElseIf ((PIOF == 0x02))
    {
        If ((P2GP == Zero))
        {
            Return (Zero)
        }
    }

    PEBA = \XBAS /* External reference */
    PDEV = GDEV (PIOF)
    PFUN = GFUN (PIOF)
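For orientation, a minimal kernel-side sketch of what turning that power resource off amounts to. The \_SB.PCI0.PEG0.PG00 path is the one quoted above for the Dell XPS 9560 and will differ on other machines, and the function name is made up for illustration only; in practice the ACPI power resource code under drivers/acpi/power.c evaluates _OFF when the last device referencing the resource goes into D3cold, so nothing should call this by hand:

#include <linux/acpi.h>
#include <linux/errno.h>

/* Illustrative only: evaluate the _OFF method of the GPU's _PR3 power
 * resource directly.  _OFF is where PGOF() runs, i.e. the PME_Turn_Off
 * broadcast, PERST assertion and power removal described above. */
static int xps9560_peg0_power_off(void)
{
	acpi_handle pg00;
	acpi_status status;

	status = acpi_get_handle(NULL, "\\_SB.PCI0.PEG0.PG00", &pg00);
	if (ACPI_FAILURE(status))
		return -ENODEV;

	status = acpi_evaluate_object(pg00, "_OFF", NULL, NULL);
	return ACPI_SUCCESS(status) ? 0 : -EIO;
}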
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Wed, Nov 20, 2019 at 11:18 AM Mika Westerberg wrote: > > Hi Karol, > > On Tue, Nov 19, 2019 at 11:26:45PM +0100, Karol Herbst wrote: > > On Tue, Nov 19, 2019 at 10:50 PM Bjorn Helgaas wrote: > > > > > > [+cc Dave] > > > > > > On Thu, Oct 17, 2019 at 02:19:01PM +0200, Karol Herbst wrote: > > > > Fixes state transitions of Nvidia Pascal GPUs from D3cold into higher > > > > device > > > > states. > > > > > > > > v2: convert to pci_dev quirk > > > > put a proper technical explanation of the issue as a in-code comment > > > > v3: disable it only for certain combinations of intel and nvidia > > > > hardware > > > > v4: simplify quirk by setting flag on the GPU itself > > > > > > I have zero confidence that we understand the real problem, but we do > > > need to do something with this. I'll merge it for v5.5 if we get the > > > minor procedural stuff below straightened out. > > > > > > > Thanks, and I agree with your statement, but at this point I think > > only Intel can help out digging deeper as I see no way to debug this > > further. > > I don't have anything against this patch, as long as the quirk stays > limited to the particular root port leading to the NVIDIA GPU. The > reason why I think it should to be limited is that I'm pretty certain > the problem is not in the root port itself. I have here a KBL based > Thinkpad X1 Carbon 6th gen that can put the TBT controller into D3cold > (it is connected to PCH root port) and it wakes up there just fine, so > don't want to break that. > > Now, PCIe devices cannot go into D3cold all by themselves. They always > need help from the platform side which is ACPI in this case. This is > done by having the device to have _PR3 method that returns one or more > power resources that the OS is supposed to turn off when the device is > put into D3cold. All of that is implemented as form of ACPI methods that > pretty much do the hardware specific things that are outside of PCIe > spec to get the device into D3cold. At high level the _OFF() method > causes the root port to broadcast PME_Turn_Off message that results the > link to enter L2/3 ready, it then asserts PERST, configures WAKE (both > can be GPIOs) and finally removes power (if the link goes into L3, > otherwise it goes into L2). > > I think this is where the problem actually lies - the ASL methods that > are used to put the device into D3cold and back. We know that in Windows > this all works fine so unless Windows quirks the root port the same way > there is another reason behind this. > > In case of Dell XPS 9560 (IIRC that's the machine you have) the > corresponding power resource is called \_SB.PCI0.PEG0.PG00 and its > _ON/_OFF methods end up calling PGON()/PGOF() accordingly. The methods > itself do lots of things and it is hard to follow the dissassembled > ASL which does not have any comments but there are couple of things that > stand out where we may go into a different path. One of them is this in > the PGOF() method: > >If (((OSYS <= 0x07D9) || ((OSYS == 0x07DF) && (_REV == 0x05 > > The ((OSYS == 0x07DF) && (_REV == 0x05)) checks specifically for Linux > (see [1] and 18d78b64fddc ("ACPI / init: Make it possible to override > _REV")) so it might be that Dell people tested this at some point in > Linux as well. Added Mario in case he has any ideas. > > Previously I suggested you to try the ACPI method tracing to see what > happens inside PGOF(). Did you have time to try it? 
It may provide more > information about that is happening inside those methods and hopefully > point us to the root cause. > > Also if you haven't tried already passing acpi_rev_override in the > command line makes the _REV to return 5 so it should go into the "Linux" > path in PGOF(). Oh, so does it look like we are trying to work around AML that tried to work around some problematic behavior in Linux at one point? > [1] > https://www.kernel.org/doc/html/latest/firmware-guide/acpi/osi.html#do-not-use-rev ___ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
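For readers not fluent in ASL, the condition Mika quotes reads roughly like the following C. The interpretation (mine, not from the thread) is the usual one: OSYS holds the Windows year negotiated via _OSI, 0x07D9 (2009) being Windows 7 and 0x07DF (2015) Windows 10, and _REV only reports 5 on Linux when acpi_rev_override is in effect:

#include <stdbool.h>

/* Illustrative reading of
 *   If (((OSYS <= 0x07D9) || ((OSYS == 0x07DF) && (_REV == 0x05))))
 * i.e. "take the legacy (pre-Windows 8) power-off path for old Windows,
 * or for Linux when the _REV override is active". */
static bool firmware_takes_legacy_poweroff_path(unsigned int osys,
						unsigned int rev)
{
	return osys <= 0x07d9 || (osys == 0x07df && rev == 0x05);
}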
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
Hi Karol,

On Tue, Nov 19, 2019 at 11:26:45PM +0100, Karol Herbst wrote:
> On Tue, Nov 19, 2019 at 10:50 PM Bjorn Helgaas wrote:
> >
> > [+cc Dave]
> >
> > On Thu, Oct 17, 2019 at 02:19:01PM +0200, Karol Herbst wrote:
> > > Fixes state transitions of Nvidia Pascal GPUs from D3cold into higher
> > > device states.
> > >
> > > v2: convert to pci_dev quirk
> > >     put a proper technical explanation of the issue as a in-code comment
> > > v3: disable it only for certain combinations of intel and nvidia hardware
> > > v4: simplify quirk by setting flag on the GPU itself
> >
> > I have zero confidence that we understand the real problem, but we do
> > need to do something with this. I'll merge it for v5.5 if we get the
> > minor procedural stuff below straightened out.
>
> Thanks, and I agree with your statement, but at this point I think
> only Intel can help out digging deeper as I see no way to debug this
> further.

I don't have anything against this patch, as long as the quirk stays limited to the particular root port leading to the NVIDIA GPU. The reason I think it should be limited is that I'm pretty certain the problem is not in the root port itself. I have here a KBL-based Thinkpad X1 Carbon 6th gen that can put the TBT controller into D3cold (it is connected to a PCH root port) and it wakes up from there just fine, so I don't want to break that.

Now, PCIe devices cannot go into D3cold all by themselves. They always need help from the platform side, which is ACPI in this case. This is done by having the device expose a _PR3 method that returns one or more power resources that the OS is supposed to turn off when the device is put into D3cold. All of that is implemented as a set of ACPI methods that do the hardware-specific things outside the PCIe spec needed to get the device into D3cold. At a high level the _OFF() method causes the root port to broadcast a PME_Turn_Off message so that the link enters L2/3 Ready; it then asserts PERST, configures WAKE (both can be GPIOs) and finally removes power (if the link goes into L3, otherwise it stays in L2).

I think this is where the problem actually lies - the ASL methods that are used to put the device into D3cold and back. We know that in Windows this all works fine, so unless Windows quirks the root port the same way there is another reason behind this.

In the case of the Dell XPS 9560 (IIRC that's the machine you have) the corresponding power resource is called \_SB.PCI0.PEG0.PG00 and its _ON/_OFF methods end up calling PGON()/PGOF() accordingly. The methods themselves do lots of things and it is hard to follow the disassembled ASL, which does not have any comments, but there are a couple of places that stand out where we may go down a different path. One of them is this in the PGOF() method:

   If (((OSYS <= 0x07D9) || ((OSYS == 0x07DF) && (_REV == 0x05

The ((OSYS == 0x07DF) && (_REV == 0x05)) check is specifically for Linux (see [1] and 18d78b64fddc ("ACPI / init: Make it possible to override _REV")), so it might be that Dell people tested this on Linux at some point as well. Added Mario in case he has any ideas.

Previously I suggested you try ACPI method tracing to see what happens inside PGOF(). Did you have time to try it? It may provide more information about what is happening inside those methods and hopefully point us to the root cause.

Also, if you haven't tried it already, passing acpi_rev_override on the command line makes _REV return 5, so it should go into the "Linux" path in PGOF().
[1] https://www.kernel.org/doc/html/latest/firmware-guide/acpi/osi.html#do-not-use-rev > > > Signed-off-by: Karol Herbst > > > Cc: Bjorn Helgaas > > > Cc: Lyude Paul > > > Cc: Rafael J. Wysocki > > > Cc: Mika Westerberg > > > Cc: linux-...@vger.kernel.org > > > Cc: linux...@vger.kernel.org > > > Cc: dri-devel@lists.freedesktop.org > > > Cc: nouv...@lists.freedesktop.org > > > --- > > > drivers/pci/pci.c| 7 ++ > > > drivers/pci/quirks.c | 53 > > > include/linux/pci.h | 1 + > > > 3 files changed, 61 insertions(+) > > > > > > diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c > > > index b97d9e10c9cc..02e71e0bcdd7 100644 > > > --- a/drivers/pci/pci.c > > > +++ b/drivers/pci/pci.c > > > @@ -850,6 +850,13 @@ static int pci_raw_set_power_state(struct pci_dev > > > *dev, pci_power_t state) > > > || (state == PCI_D2 && !dev->d2_support)) > > > return -EIO; > > > > > > + /* > > > + * check if we have a bad combination of bridge controller and > > > nvidia > > > + * GPU, see quirk_broken_nv_runpm for more info > > > > Whitespace damage. Capitalized incorrectly (see other comments > > nearby). > > > > > + */ > > > + if (state != PCI_D0 && dev->broken_nv_runpm) > > > + return 0; > > > + > > > pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &pmcsr); > > > > > > /* > > > diff --git a/drivers/pci/quirks.c b
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
[+cc Daniel, Vidya, Thierry; thread starts at https://lore.kernel.org/r/20191017121901.13699-1-kher...@redhat.com] On Tue, Nov 19, 2019 at 11:26:45PM +0100, Karol Herbst wrote: > On Tue, Nov 19, 2019 at 10:50 PM Bjorn Helgaas wrote: > > On Thu, Oct 17, 2019 at 02:19:01PM +0200, Karol Herbst wrote: > > > Fixes state transitions of Nvidia Pascal GPUs from D3cold into > > > higher device states. > > > ... > > > + * - kernel crashes, as all pci reads return -1, which most code > > > + *isn't able to handle well enough. > > > > More details about these crashes would be useful as we look at > > places that *should* be able to handle errors like this. > > makes sense, I could ,orthogonal to this, make the code more robust > if we hit issues like this in the future. What I am mostly wondering > about is, why pci core doesn't give up if the device doesn't come > back from D3cold? It sounds like, that the most sane thing to do > here is to just give up and fail runtime_resume and report errors > back to userspace trying to make use of the devices. It's possible there's something the core could do here. It's interesting that we have at least three issues in this area in this release: 1) This NVIDIA GPU issue 2) Daniel's issue with AMD Ryzen5/7 XHCI power-on (https://lore.kernel.org/r/20191014061355.29072-1-dr...@endlessm.com) 3) Vidya's issue with Intel 750 NVMe power-on (https://lore.kernel.org/r/20191118172310.21373-1-vid...@nvidia.com) Vidya's current patch is the most generic (calling pci_dev_wait() from pci_power_up()). That will print a warning if the device doesn't respond, but we still don't go as far as returning errors to runtime_resume. This is definitely something we should consider improving in the future. > > > + * - sudden shutdowns, as the kernel identified an unrecoverable error > > > after > > > + *userspace tries to access the GPU. > > > > This doesn't fit with the others and more details might be > > informative here as well. > > yeah.. I try to get more infos on that. But at least for me (and it > might be a distribution thing) if I execute lspci, the system shuts > down, or at least tries to and might fail. The lspci doesn't need to be after the failure occurs. You can do it immediately after boot. If you can capture any part of the dmesg or console log of these sudden shutdowns, that's all I'm interested in at this point. Bjorn ___ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel
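To make the "fail runtime_resume and report errors back to userspace" idea concrete, here is a minimal sketch of the kind of check a driver (or the core) could do after requesting D0. It is not nouveau or PCI core code and the function name is invented; it only illustrates the failure mode described in this thread, where every config read returns all-ones once the GPU is lost:

#include <linux/pci.h>

/* Sketch: after asking for D0, verify the device actually came back
 * before touching it, and fail the resume instead of crashing later. */
static int gpu_check_present_after_resume(struct pci_dev *pdev)
{
	u32 id;

	pci_read_config_dword(pdev, PCI_VENDOR_ID, &id);
	if (id == 0xffffffff) {
		pci_err(pdev, "device did not return from D3cold\n");
		return -ENODEV;	/* propagated out of runtime_resume */
	}

	return 0;
}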
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Tue, Nov 19, 2019 at 10:50 PM Bjorn Helgaas wrote: > > [+cc Dave] > > On Thu, Oct 17, 2019 at 02:19:01PM +0200, Karol Herbst wrote: > > Fixes state transitions of Nvidia Pascal GPUs from D3cold into higher device > > states. > > > > v2: convert to pci_dev quirk > > put a proper technical explanation of the issue as a in-code comment > > v3: disable it only for certain combinations of intel and nvidia hardware > > v4: simplify quirk by setting flag on the GPU itself > > I have zero confidence that we understand the real problem, but we do > need to do something with this. I'll merge it for v5.5 if we get the > minor procedural stuff below straightened out. > Thanks, and I agree with your statement, but at this point I think only Intel can help out digging deeper as I see no way to debug this further. > > Signed-off-by: Karol Herbst > > Cc: Bjorn Helgaas > > Cc: Lyude Paul > > Cc: Rafael J. Wysocki > > Cc: Mika Westerberg > > Cc: linux-...@vger.kernel.org > > Cc: linux...@vger.kernel.org > > Cc: dri-devel@lists.freedesktop.org > > Cc: nouv...@lists.freedesktop.org > > --- > > drivers/pci/pci.c| 7 ++ > > drivers/pci/quirks.c | 53 > > include/linux/pci.h | 1 + > > 3 files changed, 61 insertions(+) > > > > diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c > > index b97d9e10c9cc..02e71e0bcdd7 100644 > > --- a/drivers/pci/pci.c > > +++ b/drivers/pci/pci.c > > @@ -850,6 +850,13 @@ static int pci_raw_set_power_state(struct pci_dev > > *dev, pci_power_t state) > > || (state == PCI_D2 && !dev->d2_support)) > > return -EIO; > > > > + /* > > + * check if we have a bad combination of bridge controller and nvidia > > + * GPU, see quirk_broken_nv_runpm for more info > > Whitespace damage. Capitalized incorrectly (see other comments > nearby). > > > + */ > > + if (state != PCI_D0 && dev->broken_nv_runpm) > > + return 0; > > + > > pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &pmcsr); > > > > /* > > diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c > > index 44c4ae1abd00..0006c9e37b6f 100644 > > --- a/drivers/pci/quirks.c > > +++ b/drivers/pci/quirks.c > > @@ -5268,3 +5268,56 @@ static void > > quirk_reset_lenovo_thinkpad_p50_nvgpu(struct pci_dev *pdev) > > DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, 0x13b1, > > PCI_CLASS_DISPLAY_VGA, 8, > > quirk_reset_lenovo_thinkpad_p50_nvgpu); > > + > > +/* > > + * Some Intel PCIe bridges cause devices to disappear from the PCIe bus > > after > > + * those were put into D3cold state if they were put into a non D0 PCI PM > > + * device state before doing so. > > A device in D3cold is off the bus by definition. > > IIUC the problem is that the sequence D0 -> D3hot -> D3cold -> D0 for > the GPU fails in the transition back to D0, while D0 -> D3cold -> D0 > works fine. > > So I guess the problem is that we can put the device in D3cold with no > problem, but if we put in D3hot before going to D3cold, the device > never comes back to D0. Right? > correct. It By the way, it doesn't matter if I put the device into D1 instead, as long as the device is not in D0 state before putting it into D3cold, it fails. > > + * This leads to various issue different issues which all manifest > > differently, > > s/issue different// > > Actually, I think there's only one underlying issue with several > manifestations. > > > + * but have the same root cause: > > + * - AIML code execution hits an infinite loop (as the coe waits on device > > + *memory to change). 
> > s/AIML/AML/ > s/coe/code/ > > > + * - kernel crashes, as all pci reads return -1, which most code isn't > > able > > + *to handle well enough. > > s/pci/PCI/ > > More details about these crashes would be useful as we look at places > that *should* be able to handle errors like this. > makes sense, I could ,orthogonal to this, make the code more robust if we hit issues like this in the future. What I am mostly wondering about is, why pci core doesn't give up if the device doesn't come back from D3cold? It sounds like, that the most sane thing to do here is to just give up and fail runtime_resume and report errors back to userspace trying to make use of the devices. > > + * - sudden shutdowns, as the kernel identified an unrecoverable error > > after > > + *userspace tries to access the GPU. > > This doesn't fit with the others and more details might be > informative here as well. > yeah.. I try to get more infos on that. But at least for me (and it might be a distribution thing) if I execute lspci, the system shuts down, or at least tries to and might fail. > > + * In all cases dmesg will contain at least one line like this: > > + * 'nouveau :01:00.0: Refused to change power state, currently in D3' > > + * followed by a lot of nouveau timeouts. > > + * > > + * ACPI code writes bit 0x80 to the
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
[+cc Dave] On Thu, Oct 17, 2019 at 02:19:01PM +0200, Karol Herbst wrote: > Fixes state transitions of Nvidia Pascal GPUs from D3cold into higher device > states. > > v2: convert to pci_dev quirk > put a proper technical explanation of the issue as a in-code comment > v3: disable it only for certain combinations of intel and nvidia hardware > v4: simplify quirk by setting flag on the GPU itself I have zero confidence that we understand the real problem, but we do need to do something with this. I'll merge it for v5.5 if we get the minor procedural stuff below straightened out. > Signed-off-by: Karol Herbst > Cc: Bjorn Helgaas > Cc: Lyude Paul > Cc: Rafael J. Wysocki > Cc: Mika Westerberg > Cc: linux-...@vger.kernel.org > Cc: linux...@vger.kernel.org > Cc: dri-devel@lists.freedesktop.org > Cc: nouv...@lists.freedesktop.org > --- > drivers/pci/pci.c| 7 ++ > drivers/pci/quirks.c | 53 > include/linux/pci.h | 1 + > 3 files changed, 61 insertions(+) > > diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c > index b97d9e10c9cc..02e71e0bcdd7 100644 > --- a/drivers/pci/pci.c > +++ b/drivers/pci/pci.c > @@ -850,6 +850,13 @@ static int pci_raw_set_power_state(struct pci_dev *dev, > pci_power_t state) > || (state == PCI_D2 && !dev->d2_support)) > return -EIO; > > + /* > + * check if we have a bad combination of bridge controller and nvidia > + * GPU, see quirk_broken_nv_runpm for more info Whitespace damage. Capitalized incorrectly (see other comments nearby). > + */ > + if (state != PCI_D0 && dev->broken_nv_runpm) > + return 0; > + > pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &pmcsr); > > /* > diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c > index 44c4ae1abd00..0006c9e37b6f 100644 > --- a/drivers/pci/quirks.c > +++ b/drivers/pci/quirks.c > @@ -5268,3 +5268,56 @@ static void > quirk_reset_lenovo_thinkpad_p50_nvgpu(struct pci_dev *pdev) > DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, 0x13b1, > PCI_CLASS_DISPLAY_VGA, 8, > quirk_reset_lenovo_thinkpad_p50_nvgpu); > + > +/* > + * Some Intel PCIe bridges cause devices to disappear from the PCIe bus after > + * those were put into D3cold state if they were put into a non D0 PCI PM > + * device state before doing so. A device in D3cold is off the bus by definition. IIUC the problem is that the sequence D0 -> D3hot -> D3cold -> D0 for the GPU fails in the transition back to D0, while D0 -> D3cold -> D0 works fine. So I guess the problem is that we can put the device in D3cold with no problem, but if we put in D3hot before going to D3cold, the device never comes back to D0. Right? > + * This leads to various issue different issues which all manifest > differently, s/issue different// Actually, I think there's only one underlying issue with several manifestations. > + * but have the same root cause: > + * - AIML code execution hits an infinite loop (as the coe waits on device > + *memory to change). s/AIML/AML/ s/coe/code/ > + * - kernel crashes, as all pci reads return -1, which most code isn't able > + *to handle well enough. s/pci/PCI/ More details about these crashes would be useful as we look at places that *should* be able to handle errors like this. > + * - sudden shutdowns, as the kernel identified an unrecoverable error after > + *userspace tries to access the GPU. This doesn't fit with the others and more details might be informative here as well. > + * In all cases dmesg will contain at least one line like this: > + * 'nouveau :01:00.0: Refused to change power state, currently in D3' > + * followed by a lot of nouveau timeouts. 
> + * > + * ACPI code writes bit 0x80 to the not documented PCI register 0x248 of the > + * PCIe bridge controller in order to power down the GPU. > + * Nonetheless, there are other code paths inside the ACPI firmware which use > + * other registers, which seem to work fine: > + * - 0xbc bit 0x20 (publicly available documentation claims 'reserved') > + * - 0xb0 bit 0x10 (link disable) All these register addresses are device-specific, so they're useless without identifying the device. "lspci -vvnn" output would let us at least connect this with something. It would be nice to have that info archived along with your acpidump and python repro scripts in a bugzilla with the URL in the commit log. These are likely in PCI capabilities. If I make the leap of assuming the "link disable" bit is PCI_EXP_LNKCTL_LD, that would mean the Link Control register is at 0xb0 and the register at 0xbc would be the Root Control register, and indeed 0x20 in Root Control is reserved. I don't know what the relevance of all this is, though. It's not remarkable that accesses to these registers work. Unless you mean you can access these registers *after* trying to put the device back in D0, but o
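If Bjorn's leap is right and the PCI Express capability of this root port starts at 0xa0, then 0xb0 and 0xbc are simply its Link Control and Root Control registers, and the bits Karol lists can be read portably through the capability accessors instead of raw config offsets. A small sketch (illustrative only, function name invented); the lspci -vvnn output would confirm or refute the assumption about the capability offset:

#include <linux/pci.h>

static void dump_bridge_link_bits(struct pci_dev *bridge)
{
	u16 lnkctl, rtctl;

	/* Link Control: bit 4 (0x10) is Link Disable, PCI_EXP_LNKCTL_LD */
	pcie_capability_read_word(bridge, PCI_EXP_LNKCTL, &lnkctl);

	/* Root Control: bit 5 (0x20) is indeed reserved per the PCIe spec */
	pcie_capability_read_word(bridge, PCI_EXP_RTCTL, &rtctl);

	pci_info(bridge, "LNKCTL=%#06x (link disable=%d) RTCTL=%#06x\n",
		 lnkctl, !!(lnkctl & PCI_EXP_LNKCTL_LD), rtctl);
}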
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Thu, 17 Oct 2019 at 22:19, Karol Herbst wrote: > > Fixes state transitions of Nvidia Pascal GPUs from D3cold into higher device > states. Can we get this acked/committed? At this stage I think we've done all we can unless Intel actually escalate this internally and work out how the hw is broken. Dave. > > v2: convert to pci_dev quirk > put a proper technical explanation of the issue as a in-code comment > v3: disable it only for certain combinations of intel and nvidia hardware > v4: simplify quirk by setting flag on the GPU itself > > Signed-off-by: Karol Herbst > Cc: Bjorn Helgaas > Cc: Lyude Paul > Cc: Rafael J. Wysocki > Cc: Mika Westerberg > Cc: linux-...@vger.kernel.org > Cc: linux...@vger.kernel.org > Cc: dri-devel@lists.freedesktop.org > Cc: nouv...@lists.freedesktop.org > --- > drivers/pci/pci.c| 7 ++ > drivers/pci/quirks.c | 53 > include/linux/pci.h | 1 + > 3 files changed, 61 insertions(+) > > diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c > index b97d9e10c9cc..02e71e0bcdd7 100644 > --- a/drivers/pci/pci.c > +++ b/drivers/pci/pci.c > @@ -850,6 +850,13 @@ static int pci_raw_set_power_state(struct pci_dev *dev, > pci_power_t state) >|| (state == PCI_D2 && !dev->d2_support)) > return -EIO; > > + /* > +* check if we have a bad combination of bridge controller and nvidia > + * GPU, see quirk_broken_nv_runpm for more info > +*/ > + if (state != PCI_D0 && dev->broken_nv_runpm) > + return 0; > + > pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &pmcsr); > > /* > diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c > index 44c4ae1abd00..0006c9e37b6f 100644 > --- a/drivers/pci/quirks.c > +++ b/drivers/pci/quirks.c > @@ -5268,3 +5268,56 @@ static void > quirk_reset_lenovo_thinkpad_p50_nvgpu(struct pci_dev *pdev) > DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, 0x13b1, > PCI_CLASS_DISPLAY_VGA, 8, > quirk_reset_lenovo_thinkpad_p50_nvgpu); > + > +/* > + * Some Intel PCIe bridges cause devices to disappear from the PCIe bus after > + * those were put into D3cold state if they were put into a non D0 PCI PM > + * device state before doing so. > + * > + * This leads to various issue different issues which all manifest > differently, > + * but have the same root cause: > + * - AIML code execution hits an infinite loop (as the coe waits on device > + *memory to change). > + * - kernel crashes, as all pci reads return -1, which most code isn't able > + *to handle well enough. > + * - sudden shutdowns, as the kernel identified an unrecoverable error after > + *userspace tries to access the GPU. > + * > + * In all cases dmesg will contain at least one line like this: > + * 'nouveau :01:00.0: Refused to change power state, currently in D3' > + * followed by a lot of nouveau timeouts. > + * > + * ACPI code writes bit 0x80 to the not documented PCI register 0x248 of the > + * PCIe bridge controller in order to power down the GPU. > + * Nonetheless, there are other code paths inside the ACPI firmware which use > + * other registers, which seem to work fine: > + * - 0xbc bit 0x20 (publicly available documentation claims 'reserved') > + * - 0xb0 bit 0x10 (link disable) > + * Changing the conditions inside the firmware by poking into the relevant > + * addresses does resolve the issue, but it seemed to be ACPI private memory > + * and not any device accessible memory at all, so there is no portable way > of > + * changing the conditions. > + * > + * The only systems where this behavior can be seen are hybrid graphics > laptops > + * with a secondary Nvidia Pascal GPU. 
It cannot be ruled out that this issue > + * only occurs in combination with listed Intel PCIe bridge controllers and > + * the mentioned GPUs or if it's only a hw bug in the bridge controller. > + * > + * But because this issue was NOT seen on laptops with an Nvidia Pascal GPU > + * and an Intel Coffee Lake SoC, there is a higher chance of there being a > bug > + * in the bridge controller rather than in the GPU. > + * > + * This issue was not able to be reproduced on non laptop systems. > + */ > + > +static void quirk_broken_nv_runpm(struct pci_dev *dev) > +{ > + struct pci_dev *bridge = pci_upstream_bridge(dev); > + > + if (bridge->vendor == PCI_VENDOR_ID_INTEL && > + bridge->device == 0x1901) > + dev->broken_nv_runpm = 1; > +} > +DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID, > + PCI_BASE_CLASS_DISPLAY, 16, > + quirk_broken_nv_runpm); > diff --git a/include/linux/pci.h b/include/linux/pci.h > index ac8a6c4e1792..903a0b3a39ec 100644 > --- a/include/linux/pci.h > +++ b/include/linux/pci.h > @@ -416,6 +416,7 @@ struct pci_dev { > unsigned int__aer_firmware_first_valid:1;
Re: [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
ping on the patch. I wasn't able to verify this issue on any other bridge controller, so it really might be only this one. On Thu, Oct 17, 2019 at 2:19 PM Karol Herbst wrote: > > Fixes state transitions of Nvidia Pascal GPUs from D3cold into higher device > states. > > v2: convert to pci_dev quirk > put a proper technical explanation of the issue as a in-code comment > v3: disable it only for certain combinations of intel and nvidia hardware > v4: simplify quirk by setting flag on the GPU itself > > Signed-off-by: Karol Herbst > Cc: Bjorn Helgaas > Cc: Lyude Paul > Cc: Rafael J. Wysocki > Cc: Mika Westerberg > Cc: linux-...@vger.kernel.org > Cc: linux...@vger.kernel.org > Cc: dri-devel@lists.freedesktop.org > Cc: nouv...@lists.freedesktop.org > --- > drivers/pci/pci.c| 7 ++ > drivers/pci/quirks.c | 53 > include/linux/pci.h | 1 + > 3 files changed, 61 insertions(+) > > diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c > index b97d9e10c9cc..02e71e0bcdd7 100644 > --- a/drivers/pci/pci.c > +++ b/drivers/pci/pci.c > @@ -850,6 +850,13 @@ static int pci_raw_set_power_state(struct pci_dev *dev, > pci_power_t state) >|| (state == PCI_D2 && !dev->d2_support)) > return -EIO; > > + /* > +* check if we have a bad combination of bridge controller and nvidia > + * GPU, see quirk_broken_nv_runpm for more info > +*/ > + if (state != PCI_D0 && dev->broken_nv_runpm) > + return 0; > + > pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &pmcsr); > > /* > diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c > index 44c4ae1abd00..0006c9e37b6f 100644 > --- a/drivers/pci/quirks.c > +++ b/drivers/pci/quirks.c > @@ -5268,3 +5268,56 @@ static void > quirk_reset_lenovo_thinkpad_p50_nvgpu(struct pci_dev *pdev) > DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, 0x13b1, > PCI_CLASS_DISPLAY_VGA, 8, > quirk_reset_lenovo_thinkpad_p50_nvgpu); > + > +/* > + * Some Intel PCIe bridges cause devices to disappear from the PCIe bus after > + * those were put into D3cold state if they were put into a non D0 PCI PM > + * device state before doing so. > + * > + * This leads to various issue different issues which all manifest > differently, > + * but have the same root cause: > + * - AIML code execution hits an infinite loop (as the coe waits on device > + *memory to change). > + * - kernel crashes, as all pci reads return -1, which most code isn't able > + *to handle well enough. > + * - sudden shutdowns, as the kernel identified an unrecoverable error after > + *userspace tries to access the GPU. > + * > + * In all cases dmesg will contain at least one line like this: > + * 'nouveau :01:00.0: Refused to change power state, currently in D3' > + * followed by a lot of nouveau timeouts. > + * > + * ACPI code writes bit 0x80 to the not documented PCI register 0x248 of the > + * PCIe bridge controller in order to power down the GPU. > + * Nonetheless, there are other code paths inside the ACPI firmware which use > + * other registers, which seem to work fine: > + * - 0xbc bit 0x20 (publicly available documentation claims 'reserved') > + * - 0xb0 bit 0x10 (link disable) > + * Changing the conditions inside the firmware by poking into the relevant > + * addresses does resolve the issue, but it seemed to be ACPI private memory > + * and not any device accessible memory at all, so there is no portable way > of > + * changing the conditions. > + * > + * The only systems where this behavior can be seen are hybrid graphics > laptops > + * with a secondary Nvidia Pascal GPU. 
It cannot be ruled out that this issue > + * only occurs in combination with listed Intel PCIe bridge controllers and > + * the mentioned GPUs or if it's only a hw bug in the bridge controller. > + * > + * But because this issue was NOT seen on laptops with an Nvidia Pascal GPU > + * and an Intel Coffee Lake SoC, there is a higher chance of there being a > bug > + * in the bridge controller rather than in the GPU. > + * > + * This issue was not able to be reproduced on non laptop systems. > + */ > + > +static void quirk_broken_nv_runpm(struct pci_dev *dev) > +{ > + struct pci_dev *bridge = pci_upstream_bridge(dev); > + > + if (bridge->vendor == PCI_VENDOR_ID_INTEL && > + bridge->device == 0x1901) > + dev->broken_nv_runpm = 1; > +} > +DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID, > + PCI_BASE_CLASS_DISPLAY, 16, > + quirk_broken_nv_runpm); > diff --git a/include/linux/pci.h b/include/linux/pci.h > index ac8a6c4e1792..903a0b3a39ec 100644 > --- a/include/linux/pci.h > +++ b/include/linux/pci.h > @@ -416,6 +416,7 @@ struct pci_dev { > unsigned int__aer_firmware_first_valid:1; > unsigned int__aer_firmware
[PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
Fixes state transitions of Nvidia Pascal GPUs from D3cold into higher device
states.

v2: convert to pci_dev quirk
    put a proper technical explanation of the issue as a in-code comment
v3: disable it only for certain combinations of intel and nvidia hardware
v4: simplify quirk by setting flag on the GPU itself

Signed-off-by: Karol Herbst
Cc: Bjorn Helgaas
Cc: Lyude Paul
Cc: Rafael J. Wysocki
Cc: Mika Westerberg
Cc: linux-...@vger.kernel.org
Cc: linux...@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Cc: nouv...@lists.freedesktop.org
---
 drivers/pci/pci.c    |  7 ++
 drivers/pci/quirks.c | 53
 include/linux/pci.h  |  1 +
 3 files changed, 61 insertions(+)

diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
index b97d9e10c9cc..02e71e0bcdd7 100644
--- a/drivers/pci/pci.c
+++ b/drivers/pci/pci.c
@@ -850,6 +850,13 @@ static int pci_raw_set_power_state(struct pci_dev *dev, pci_power_t state)
 	    || (state == PCI_D2 && !dev->d2_support))
 		return -EIO;
 
+	/*
+* check if we have a bad combination of bridge controller and nvidia
+ * GPU, see quirk_broken_nv_runpm for more info
+*/
+	if (state != PCI_D0 && dev->broken_nv_runpm)
+		return 0;
+
 	pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &pmcsr);
 
 	/*
diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
index 44c4ae1abd00..0006c9e37b6f 100644
--- a/drivers/pci/quirks.c
+++ b/drivers/pci/quirks.c
@@ -5268,3 +5268,56 @@ static void quirk_reset_lenovo_thinkpad_p50_nvgpu(struct pci_dev *pdev)
 DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, 0x13b1,
 			      PCI_CLASS_DISPLAY_VGA, 8,
 			      quirk_reset_lenovo_thinkpad_p50_nvgpu);
+
+/*
+ * Some Intel PCIe bridges cause devices to disappear from the PCIe bus after
+ * those were put into D3cold state if they were put into a non D0 PCI PM
+ * device state before doing so.
+ *
+ * This leads to various issue different issues which all manifest differently,
+ * but have the same root cause:
+ * - AIML code execution hits an infinite loop (as the coe waits on device
+ *    memory to change).
+ * - kernel crashes, as all pci reads return -1, which most code isn't able
+ *    to handle well enough.
+ * - sudden shutdowns, as the kernel identified an unrecoverable error after
+ *    userspace tries to access the GPU.
+ *
+ * In all cases dmesg will contain at least one line like this:
+ * 'nouveau :01:00.0: Refused to change power state, currently in D3'
+ * followed by a lot of nouveau timeouts.
+ *
+ * ACPI code writes bit 0x80 to the not documented PCI register 0x248 of the
+ * PCIe bridge controller in order to power down the GPU.
+ * Nonetheless, there are other code paths inside the ACPI firmware which use
+ * other registers, which seem to work fine:
+ * - 0xbc bit 0x20 (publicly available documentation claims 'reserved')
+ * - 0xb0 bit 0x10 (link disable)
+ * Changing the conditions inside the firmware by poking into the relevant
+ * addresses does resolve the issue, but it seemed to be ACPI private memory
+ * and not any device accessible memory at all, so there is no portable way of
+ * changing the conditions.
+ *
+ * The only systems where this behavior can be seen are hybrid graphics laptops
+ * with a secondary Nvidia Pascal GPU. It cannot be ruled out that this issue
+ * only occurs in combination with listed Intel PCIe bridge controllers and
+ * the mentioned GPUs or if it's only a hw bug in the bridge controller.
+ *
+ * But because this issue was NOT seen on laptops with an Nvidia Pascal GPU
+ * and an Intel Coffee Lake SoC, there is a higher chance of there being a bug
+ * in the bridge controller rather than in the GPU.
+ *
+ * This issue was not able to be reproduced on non laptop systems.
+ */
+
+static void quirk_broken_nv_runpm(struct pci_dev *dev)
+{
+	struct pci_dev *bridge = pci_upstream_bridge(dev);
+
+	if (bridge->vendor == PCI_VENDOR_ID_INTEL &&
+	    bridge->device == 0x1901)
+		dev->broken_nv_runpm = 1;
+}
+DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID,
+			      PCI_BASE_CLASS_DISPLAY, 16,
+			      quirk_broken_nv_runpm);
diff --git a/include/linux/pci.h b/include/linux/pci.h
index ac8a6c4e1792..903a0b3a39ec 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -416,6 +416,7 @@ struct pci_dev {
 	unsigned int	__aer_firmware_first_valid:1;
 	unsigned int	__aer_firmware_first:1;
 	unsigned int	broken_intx_masking:1;	/* INTx masking can't be used */
+	unsigned int	broken_nv_runpm:1; /* some combinations of intel bridge controller and nvidia GPUs break rtd3 */
 	unsigned int	io_window_1k:1;	/* Intel bridge 1K I/O windows */
 	unsigned int	irq_managed:1;
 	unsigned int	has_secondary_link:1;
-- 
2.21.0
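A note on the DECLARE_PCI_FIXUP_CLASS_FINAL() arguments used above: the class/class_shift pair decides how much of the 24-bit PCI class code has to match. The helper below paraphrases the comparison the fixup core performs (my reading of drivers/pci/quirks.c; the function name is only for illustration) and shows why the new quirk registers with (PCI_BASE_CLASS_DISPLAY, 16) while the Lenovo quirk above it uses (PCI_CLASS_DISPLAY_VGA, 8):

#include <linux/pci.h>

/* dev->class holds base class << 16 | sub class << 8 | prog-if.  A fixup
 * applies when its registered class equals dev->class shifted right by
 * class_shift (or when the registered class is PCI_ANY_ID). */
static bool fixup_class_matches(const struct pci_dev *dev,
				u32 class, unsigned int class_shift)
{
	return class == (u32)PCI_ANY_ID ||
	       class == dev->class >> class_shift;
}

/* With class_shift == 16 only the base class byte is compared, so
 * PCI_BASE_CLASS_DISPLAY (0x03) matches VGA controllers (0x0300xx) as
 * well as 3D controllers (0x0302xx); the latter is what a hybrid-graphics
 * NVIDIA GPU usually is, which is why the broader match is used here. */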