Re: [Nouveau] [PATCH v4 1/1] drm: allow limiting the scatter list size.

2020-09-07 Thread Gerd Hoffmann
On Mon, Sep 07, 2020 at 03:53:02PM +0200, Daniel Vetter wrote:
> On Mon, Sep 7, 2020 at 1:24 PM Gerd Hoffmann  wrote:
> >
> > Add drm_device argument to drm_prime_pages_to_sg(), so we can
> > call dma_max_mapping_size() to figure out the segment size limit
> > and call into __sg_alloc_table_from_pages() with the correct
> > limit.
> >
> > This fixes virtio-gpu with SEV.  Possibly it'll fix other bugs
> > too, given that drm seems to totally ignore segment size limits
> > so far ...
> >
> > v2: place max_segment in drm driver not gem object.
> > v3: move max_segment next to the other gem fields.
> > v4: just use dma_max_mapping_size().
> >
> > Signed-off-by: Gerd Hoffmann 
> 
> Uh, are you sure this works in all cases for virtio?

Sure, I've tested it ;)

> The comments I've found suggest very much not ... Or is that all very
> old stuff that no one cares about anymore?

I think these days it is possible to override dma_ops per device, which
in turn allows virtio to deal with the quirks without the rest of the
kernel knowing about these details.

I also think virtio-gpu can drop the virtio_has_dma_quirk() checks, just
use the dma api path unconditionally, and depend on the virtio core having
set up dma_ops in a way that it JustWorks[tm].  I'll look into that next.

take care,
  Gerd

___
Nouveau mailing list
Nouveau@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/nouveau


Re: [Nouveau] pcieport 0000:00:01.0: PME: Spurious native interrupt (nvidia with nouveau and thunderbolt on thinkpad P73)

2020-09-07 Thread Marc MERLIN
On Tue, Sep 08, 2020 at 01:51:19AM +0200, Karol Herbst wrote:
> oh, I somehow missed that "disp ctor failed" message. I think that
> might explain why things are a bit hanging. From the top of my head I
> am not sure if that's something known or something new. But just in
> case I CCed Lyude and Ben. And I think booting with
> nouveau.debug=disp=trace could already show something relevant.

Thanks.
I've added that to my boot for next time I reboot.

I'm moving some folks to Bcc now, and let's remove the lists other than
nouveau on followups (lkml and pci). I'm just putting a warning here
so that it shows up in other list archives and anyone finding this
later knows that they should look in the nouveau archives for further
updates/resolution.

Thanks,
Marc
-- 
"A mouse is a device used to point at the xterm you want to type in" - A.S.R.
 
Home page: http://marc.merlins.org/   | PGP 7F55D5F27AAF9D08


Re: [Nouveau] pcieport 0000:00:01.0: PME: Spurious native interrupt (nvidia with nouveau and thunderbolt on thinkpad P73)

2020-09-07 Thread Karol Herbst
On Mon, Sep 7, 2020 at 10:58 PM Marc MERLIN  wrote:
>
> On Mon, Sep 07, 2020 at 09:14:03PM +0200, Karol Herbst wrote:
> > > - changes in the nouveau driver. Mika told me the PCIe regression
> > >   "pcieport :00:01.0: PME: Spurious native interrupt!" is supposed
> > >   to be fixed in 5.8, but I still get a 4 min hang or so during boot
> > >   even with 5.8; removing the USB key didn't help make the boot faster
> >
> > that's the root port the GPU is attached to, no? I saw that message on
> > the Thinkpad P1G2 when runtime resuming the Nvidia GPU, but it does
> > seem to come from the root port.
>
> Hi Karol, thanks for your answer.
>
> 00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core 
> Processor PCIe Controller (x16) (rev 0d)
> 01:00.0 VGA compatible controller: NVIDIA Corporation TU104GLM [Quadro RTX 
> 4000 Mobile / Max-Q] (rev a1)
>
> > Well, you'd also need it when attaching external displays.
>
> Indeed. I just don't need that on this laptop, but I'm familiar with the
> not-so-seamless procedure of turning on both GPUs and mirroring the Intel
> one into the Nvidia one for external output.
>
> > > [   11.262985] nvidia-gpu :01:00.3: PME# enabled
> > > [   11.303060] nvidia-gpu :01:00.3: PME# disabled
> >
> > mhh, interesting. I heard some random comments that the Nvidia
> > USB-C/UCSI driver is a bit broken and can cause various issues. Mind
> > blacklisting i2c-nvidia-gpu and typec_nvidia (and verify they don't
> > get loaded) and see if that helps?
>
> Right, this one:
> 01:00.3 Serial bus controller [0c80]: NVIDIA Corporation TU104 USB Type-C 
> UCSI Controller (rev a1)
> Sure, I'll blacklist it. Ok, just did that, removed from initrd,
> rebooted, and it was no better.
>
> From initrd (before root gets mounted), I have this:
> nouveau  1961984  0
> mxm_wmi16384  1 nouveau
> hwmon  32768  1 nouveau
> ttm   102400  1 nouveau
> wmi32768  2 nouveau,mxm_wmi
>
> I still got a 2 min hang, and a nouveau probe error:
> [  189.124530] nouveau: probe of :01:00.0 failed with error -12
>
>
> Here's what it looks like:
> [9.693230] hid: raw HID events driver (C) Jiri Kosina
> [9.694988] usbcore: registered new interface driver usbhid
> [9.694989] usbhid: USB HID core driver
> [9.696700] hid-generic 0003:1050:0200.0001: hiddev0,hidraw0: USB HID 
> v1.00 Device [Yubico Yubico Gnubby (gnubby1)] on usb-:00:14.0-2/input0
> [9.784456] Console: switching to colour frame buffer device 240x67
> [9.816297] i915 :00:02.0: fb0: i915drmfb frame buffer device
> [   25.087400] thunderbolt :06:00.0: saving config space at offset 0x0 
> (reading 0x15eb8086)
> [   25.087414] thunderbolt :06:00.0: saving config space at offset 0x4 
> (reading 0x100406)
> [   25.087419] thunderbolt :06:00.0: saving config space at offset 0x8 
> (reading 0x886)
> [   25.087424] thunderbolt :06:00.0: saving config space at offset 0xc 
> (reading 0x20)
> [   25.087430] thunderbolt :06:00.0: saving config space at offset 0x10 
> (reading 0xcc10)
> [   25.087435] thunderbolt :06:00.0: saving config space at offset 0x14 
> (reading 0xcc14)
> [   25.087440] thunderbolt :06:00.0: saving config space at offset 0x18 
> (reading 0x0)
> [   25.087445] thunderbolt :06:00.0: saving config space at offset 0x1c 
> (reading 0x0)
> [   25.087450] thunderbolt :06:00.0: saving config space at offset 0x20 
> (reading 0x0)
> [   25.087455] thunderbolt :06:00.0: saving config space at offset 0x24 
> (reading 0x0)
> [   25.087460] thunderbolt :06:00.0: saving config space at offset 0x28 
> (reading 0x0)
> [   25.087466] thunderbolt :06:00.0: saving config space at offset 0x2c 
> (reading 0x229b17aa)
> [   25.087471] thunderbolt :06:00.0: saving config space at offset 0x30 
> (reading 0x0)
> [   25.087476] thunderbolt :06:00.0: saving config space at offset 0x34 
> (reading 0x80)
> [   25.087481] thunderbolt :06:00.0: saving config space at offset 0x38 
> (reading 0x0)
> [   25.087486] thunderbolt :06:00.0: saving config space at offset 0x3c 
> (reading 0x1ff)
> [   25.087571] thunderbolt :06:00.0: PME# enabled
> [   25.105353] pcieport :05:00.0: saving config space at offset 0x0 
> (reading 0x15ea8086)
> [   25.105364] pcieport :05:00.0: saving config space at offset 0x4 
> (reading 0x100407)
> [   25.105370] pcieport :05:00.0: saving config space at offset 0x8 
> (reading 0x6040006)
> [   25.105375] pcieport :05:00.0: saving config space at offset 0xc 
> (reading 0x10020)
> [   25.105380] pcieport :05:00.0: saving config space at offset 0x10 
> (reading 0x0)
> [   25.105384] pcieport :05:00.0: saving config space at offset 0x14 
> (reading 0x0)
> [   25.105389] pcieport :05:00.0: saving config space at offset 0x18 
> (reading 0x60605)
> [   25.105394] pcieport :05:00.0: saving config space at offset 0x1c 
> (reading 

Re: [Nouveau] [PATCH v5 1/2] drm/nouveau/kms/nv50-: Program notifier offset before requesting disp caps

2020-09-07 Thread Ben Skeggs
On Sat, 5 Sep 2020 at 06:28, Lyude Paul  wrote:
>
> Not entirely sure why this never came up when I originally tested this
> (maybe some BIOSes already have this setup?) but the ->caps_init vfunc
> appears to cause the display engine to throw an exception on driver
> init, at least on my ThinkPad P72:
>
> nouveau :01:00.0: disp: chid 0 mthd 008c data  508c 102b
>
> This is magic nvidia speak for "You need to have the DMA notifier offset
> programmed before you can call NV507D_GET_CAPABILITIES." So, let's fix
> this by doing that, and also perform an update afterwards to prevent
> racing with the GPU when reading capabilities.
>
> v2:
> * Don't just program the DMA notifier offset, make sure to actually
>   perform an update
> v3:
> * Don't call UPDATE()
> * Actually read the correct notifier fields, as apparently the
>   CAPABILITIES_DONE field lives in a different location than the main
>   NV_DISP_CORE_NOTIFIER_1 field. As well, 907d+ use a different
>   CAPABILITIES_DONE field than pre-907d cards.
> v4:
> * Don't forget to check the return value of core507d_read_caps()
> v5:
> * Get rid of NV50_DISP_CAPS_NTFY[14], use NV50_DISP_CORE_NTFY
> * Disable notifier after calling GetCapabilities()
>
> Signed-off-by: Lyude Paul 
> Fixes: 4a2cb4181b07 ("drm/nouveau/kms/nv50-: Probe SOR and PIOR caps for DP 
> interlacing support")
> Cc:  # v5.8+
Thanks Lyude, looks good, and merged!

Ben.

> ---
>  drivers/gpu/drm/nouveau/dispnv50/core.h   |  2 +
>  drivers/gpu/drm/nouveau/dispnv50/core507d.c   | 41 ++-
>  drivers/gpu/drm/nouveau/dispnv50/core907d.c   | 36 +++-
>  drivers/gpu/drm/nouveau/dispnv50/core917d.c   |  2 +-
>  .../drm/nouveau/include/nvhw/class/cl507d.h   |  5 ++-
>  .../drm/nouveau/include/nvhw/class/cl907d.h   |  4 ++
>  6 files changed, 85 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/gpu/drm/nouveau/dispnv50/core.h 
> b/drivers/gpu/drm/nouveau/dispnv50/core.h
> index 498622c0c670d..f75088186fba3 100644
> --- a/drivers/gpu/drm/nouveau/dispnv50/core.h
> +++ b/drivers/gpu/drm/nouveau/dispnv50/core.h
> @@ -44,6 +44,7 @@ int core507d_new_(const struct nv50_core_func *, struct 
> nouveau_drm *, s32,
>   struct nv50_core **);
>  int core507d_init(struct nv50_core *);
>  void core507d_ntfy_init(struct nouveau_bo *, u32);
> +int core507d_read_caps(struct nv50_disp *disp);
>  int core507d_caps_init(struct nouveau_drm *, struct nv50_disp *);
>  int core507d_ntfy_wait_done(struct nouveau_bo *, u32, struct nvif_device *);
>  int core507d_update(struct nv50_core *, u32 *, bool);
> @@ -55,6 +56,7 @@ extern const struct nv50_outp_func pior507d;
>  int core827d_new(struct nouveau_drm *, s32, struct nv50_core **);
>
>  int core907d_new(struct nouveau_drm *, s32, struct nv50_core **);
> +int core907d_caps_init(struct nouveau_drm *drm, struct nv50_disp *disp);
>  extern const struct nv50_outp_func dac907d;
>  extern const struct nv50_outp_func sor907d;
>
> diff --git a/drivers/gpu/drm/nouveau/dispnv50/core507d.c 
> b/drivers/gpu/drm/nouveau/dispnv50/core507d.c
> index 248edf69e1683..e6f16a7750f07 100644
> --- a/drivers/gpu/drm/nouveau/dispnv50/core507d.c
> +++ b/drivers/gpu/drm/nouveau/dispnv50/core507d.c
> @@ -78,18 +78,55 @@ core507d_ntfy_init(struct nouveau_bo *bo, u32 offset)
>  }
>
>  int
> -core507d_caps_init(struct nouveau_drm *drm, struct nv50_disp *disp)
> +core507d_read_caps(struct nv50_disp *disp)
>  {
> struct nvif_push *push = disp->core->chan.push;
> int ret;
>
> -   if ((ret = PUSH_WAIT(push, 2)))
> +   ret = PUSH_WAIT(push, 6);
> +   if (ret)
> return ret;
>
> +   PUSH_MTHD(push, NV507D, SET_NOTIFIER_CONTROL,
> + NVDEF(NV507D, SET_NOTIFIER_CONTROL, MODE, WRITE) |
> + NVVAL(NV507D, SET_NOTIFIER_CONTROL, OFFSET, 
> NV50_DISP_CORE_NTFY >> 2) |
> + NVDEF(NV507D, SET_NOTIFIER_CONTROL, NOTIFY, ENABLE));
> +
> PUSH_MTHD(push, NV507D, GET_CAPABILITIES, 0x);
> +
> +   PUSH_MTHD(push, NV507D, SET_NOTIFIER_CONTROL,
> + NVDEF(NV507D, SET_NOTIFIER_CONTROL, NOTIFY, DISABLE));
> +
> return PUSH_KICK(push);
>  }
>
> +int
> +core507d_caps_init(struct nouveau_drm *drm, struct nv50_disp *disp)
> +{
> +   struct nv50_core *core = disp->core;
> +   struct nouveau_bo *bo = disp->sync;
> +   s64 time;
> +   int ret;
> +
> +   NVBO_WR32(bo, NV50_DISP_CORE_NTFY, NV_DISP_CORE_NOTIFIER_1, 
> CAPABILITIES_1,
> +NVDEF(NV_DISP_CORE_NOTIFIER_1, 
> CAPABILITIES_1, DONE, FALSE));
> +
> +   ret = core507d_read_caps(disp);
> +   if (ret < 0)
> +   return ret;
> +
> +   time = nvif_msec(core->chan.base.device, 2000ULL,
> +if (NVBO_TD32(bo, NV50_DISP_CORE_NTFY,
> +  NV_DISP_CORE_NOTIFIER_1, 
> CAPABILITIES_1, DONE, ==, TRUE))
> +break;
> +   

Re: [Nouveau] pcieport 0000:00:01.0: PME: Spurious native interrupt (nvidia with nouveau and thunderbolt on thinkpad P73)

2020-09-07 Thread Marc MERLIN
On Mon, Sep 07, 2020 at 09:14:03PM +0200, Karol Herbst wrote:
> > - changes in the nouveau driver. Mika told me the PCIe regression
> >   "pcieport :00:01.0: PME: Spurious native interrupt!" is supposed
> >   to be fixed in 5.8, but I still get a 4 min hang or so during boot
> >   even with 5.8; removing the USB key didn't help make the boot faster
> 
> that's the root port the GPU is attached to, no? I saw that message on
> the Thinkpad P1G2 when runtime resuming the Nvidia GPU, but it does
> seem to come from the root port.

Hi Karol, thanks for your answer.
 
00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core 
Processor PCIe Controller (x16) (rev 0d)
01:00.0 VGA compatible controller: NVIDIA Corporation TU104GLM [Quadro RTX 4000 
Mobile / Max-Q] (rev a1)

> Well, you'd also need it when attaching external displays.
 
Indeed. I just don't need that on this laptop, but I'm familiar with the
not-so-seamless procedure of turning on both GPUs and mirroring the Intel
one into the Nvidia one for external output.

> > [   11.262985] nvidia-gpu :01:00.3: PME# enabled
> > [   11.303060] nvidia-gpu :01:00.3: PME# disabled
> 
> mhh, interesting. I heard some random comments that the Nvidia
> USB-C/UCSI driver is a bit broken and can cause various issues. Mind
> blacklisting i2c-nvidia-gpu and typec_nvidia (and verify they don't
> get loaded) and see if that helps?

Right, this one:
01:00.3 Serial bus controller [0c80]: NVIDIA Corporation TU104 USB Type-C UCSI 
Controller (rev a1)
Sure, I'll blacklist it. Ok, just did that, removed from initrd,
rebooted, and it was no better.

From initrd (before root gets mounted), I have this:
nouveau  1961984  0
mxm_wmi16384  1 nouveau
hwmon  32768  1 nouveau
ttm   102400  1 nouveau
wmi32768  2 nouveau,mxm_wmi

I still got a 2 min hang, and a nouveau probe error:
[  189.124530] nouveau: probe of :01:00.0 failed with error -12


Here's what it looks like:
[9.693230] hid: raw HID events driver (C) Jiri Kosina
[9.694988] usbcore: registered new interface driver usbhid
[9.694989] usbhid: USB HID core driver
[9.696700] hid-generic 0003:1050:0200.0001: hiddev0,hidraw0: USB HID v1.00 
Device [Yubico Yubico Gnubby (gnubby1)] on usb-:00:14.0-2/input0
[9.784456] Console: switching to colour frame buffer device 240x67
[9.816297] i915 :00:02.0: fb0: i915drmfb frame buffer device
[   25.087400] thunderbolt :06:00.0: saving config space at offset 0x0 
(reading 0x15eb8086)
[   25.087414] thunderbolt :06:00.0: saving config space at offset 0x4 
(reading 0x100406)
[   25.087419] thunderbolt :06:00.0: saving config space at offset 0x8 
(reading 0x886)
[   25.087424] thunderbolt :06:00.0: saving config space at offset 0xc 
(reading 0x20)
[   25.087430] thunderbolt :06:00.0: saving config space at offset 0x10 
(reading 0xcc10)
[   25.087435] thunderbolt :06:00.0: saving config space at offset 0x14 
(reading 0xcc14)
[   25.087440] thunderbolt :06:00.0: saving config space at offset 0x18 
(reading 0x0)
[   25.087445] thunderbolt :06:00.0: saving config space at offset 0x1c 
(reading 0x0)
[   25.087450] thunderbolt :06:00.0: saving config space at offset 0x20 
(reading 0x0)
[   25.087455] thunderbolt :06:00.0: saving config space at offset 0x24 
(reading 0x0)
[   25.087460] thunderbolt :06:00.0: saving config space at offset 0x28 
(reading 0x0)
[   25.087466] thunderbolt :06:00.0: saving config space at offset 0x2c 
(reading 0x229b17aa)
[   25.087471] thunderbolt :06:00.0: saving config space at offset 0x30 
(reading 0x0)
[   25.087476] thunderbolt :06:00.0: saving config space at offset 0x34 
(reading 0x80)
[   25.087481] thunderbolt :06:00.0: saving config space at offset 0x38 
(reading 0x0)
[   25.087486] thunderbolt :06:00.0: saving config space at offset 0x3c 
(reading 0x1ff)
[   25.087571] thunderbolt :06:00.0: PME# enabled
[   25.105353] pcieport :05:00.0: saving config space at offset 0x0 
(reading 0x15ea8086)
[   25.105364] pcieport :05:00.0: saving config space at offset 0x4 
(reading 0x100407)
[   25.105370] pcieport :05:00.0: saving config space at offset 0x8 
(reading 0x6040006)
[   25.105375] pcieport :05:00.0: saving config space at offset 0xc 
(reading 0x10020)
[   25.105380] pcieport :05:00.0: saving config space at offset 0x10 
(reading 0x0)
[   25.105384] pcieport :05:00.0: saving config space at offset 0x14 
(reading 0x0)
[   25.105389] pcieport :05:00.0: saving config space at offset 0x18 
(reading 0x60605)
[   25.105394] pcieport :05:00.0: saving config space at offset 0x1c 
(reading 0x1f1)
[   25.105399] pcieport :05:00.0: saving config space at offset 0x20 
(reading 0xcc10cc10)
[   25.105404] pcieport :05:00.0: saving config space at offset 0x24 
(reading 0x1fff1)
[   25.105409] pcieport :05:00.0: saving config 

Re: [Nouveau] pcieport 0000:00:01.0: PME: Spurious native interrupt (nvidia with nouveau and thunderbolt on thinkpad P73)

2020-09-07 Thread Karol Herbst
On Sun, Sep 6, 2020 at 8:52 PM Marc MERLIN  wrote:
>
> Ok, I have an update to this problem. I added the nouveau list because
> I can't quite tell if the issue is:
> - the PCIe changes that went in 5.6 I think (or 5.5?), referenced below
>
> - a new issue with thunderbolt on thinkpad P73, that seems to be
>   triggered if I have a USB-C yubikey in the port. With 5.7, my issues
>   went away if I removed the USB key during boot, showing an interaction
>   between nouveau and thunderbolt
>
> - changes in the nouveau driver. Mika told me the PCIe regression
>   "pcieport :00:01.0: PME: Spurious native interrupt!" is supposed
>   to be fixed in 5.8, but I still get a 4 min hang or so during boot
>   even with 5.8; removing the USB key didn't help make the boot faster
>

that's the root port the GPU is attached to, no? I saw that message on
the Thinkpad P1G2 when runtime resuming the Nvidia GPU, but it does
seem to come from the root port.

> I don't otherwise use the nvidia chip I so wish I didn't have, I only
> use intel graphics on that laptop, but I must apparently use the nouveau
> driver to manage the nvidia chip so that it's turned off and not
> burning 60W doing nothing.
>

Well, you'd also need it when attaching external displays.

> lspci is in the quoted message below, I won't copy it here again, but
> here's the nvidia bit:
> 01:00.0 VGA compatible controller: NVIDIA Corporation TU104GLM [Quadro RTX 
> 4000 Mobile / Max-Q] (rev a1)
> 01:00.1 Audio device: NVIDIA Corporation TU104 HD Audio Controller (rev a1)
> 01:00.2 USB controller: NVIDIA Corporation TU104 USB 3.1 Host Controller (rev 
> a1)
> 01:00.3 Serial bus controller [0c80]: NVIDIA Corporation TU104 USB Type-C 
> UCSI Controller (rev a1)
>
> Here are 5 boots, 4 on 5.8.5:
>
> dmesg.1_hang_but_no_warning.txt https://pastebin.com/Y5NaH08n
> Boot hung for quite a while, but no clear output
>
> dmesg.2_pme_spurious.txt https://pastebin.com/dX19aCpj
> [8.185808] nvidia-gpu :01:00.3: runtime IRQ mapping not provided by 
> arch
> [8.185989] nvidia-gpu :01:00.3: enabling device ( -> 0002)
> [8.188986] nvidia-gpu :01:00.3: enabling bus mastering
> [   11.936507] nvidia-gpu :01:00.3: PME# enabled
> [   11.975985] nvidia-gpu :01:00.3: PME# disabled
> [   11.976011] pcieport :00:01.0: PME: Spurious native interrupt!
>
> dmesg.3_usb_key_yanked.txt https://pastebin.com/m7QLnCZt
> I yanked the USB key during boot, that seemed to help unlock things with
> 5.7, but did not with 5.8. It's hung on a loop of:
> [   11.262854] nvidia-gpu :01:00.3: saving config space at offset 0x0 
> (reading 0x1ad910de)
> [   11.262863] nvidia-gpu :01:00.3: saving config space at offset 0x4 
> (reading 0x100406)
> [   11.262869] nvidia-gpu :01:00.3: saving config space at offset 0x8 
> (reading 0xc8000a1)
> [   11.262874] nvidia-gpu :01:00.3: saving config space at offset 0xc 
> (reading 0x80)
> [   11.262880] nvidia-gpu :01:00.3: saving config space at offset 0x10 
> (reading 0xce054000)
> [   11.262885] nvidia-gpu :01:00.3: saving config space at offset 0x14 
> (reading 0x0)
> [   11.262890] nvidia-gpu :01:00.3: saving config space at offset 0x18 
> (reading 0x0)
> [   11.262895] nvidia-gpu :01:00.3: saving config space at offset 0x1c 
> (reading 0x0)
> [   11.262900] nvidia-gpu :01:00.3: saving config space at offset 0x20 
> (reading 0x0)
> [   11.262906] nvidia-gpu :01:00.3: saving config space at offset 0x24 
> (reading 0x0)
> [   11.262911] nvidia-gpu :01:00.3: saving config space at offset 0x28 
> (reading 0x0)
> [   11.262916] nvidia-gpu :01:00.3: saving config space at offset 0x2c 
> (reading 0x229b17aa)
> [   11.262921] nvidia-gpu :01:00.3: saving config space at offset 0x30 
> (reading 0x0)
> [   11.262926] nvidia-gpu :01:00.3: saving config space at offset 0x34 
> (reading 0x68)
> [   11.262931] nvidia-gpu :01:00.3: saving config space at offset 0x38 
> (reading 0x0)
> [   11.262937] nvidia-gpu :01:00.3: saving config space at offset 0x3c 
> (reading 0x4ff)
> [   11.262985] nvidia-gpu :01:00.3: PME# enabled
> [   11.303060] nvidia-gpu :01:00.3: PME# disabled
>

mhh, interesting. I heard some random comments that the Nvidia
USB-C/UCSI driver is a bit broken and can cause various issues. Mind
blacklisting i2c-nvidia-gpu and typec_nvidia (and verify they don't
get loaded) and see if that helps?

> dmesg.4_5.5_boot_fine.txt https://pastebin.com/WXgQTUYP
> reference boot with 5.5, it works fine, no issues
>
> dmesg.5_no_key_still_hang.txt https://pastebin.com/kcT8Ras0
> unfortunately, booting without the USB-C key in thunderbolt, did not
> allow this boot to be faster, it looks different though:
> [6.723454] pcieport :00:01.0: runtime IRQ mapping not provided by arch
> [6.723598] pcieport :00:01.0: PME: Signaling with IRQ 122
> [6.724011] pcieport :00:01.0: saving config space at offset 0x0 
> (reading 0x19018086)
> [6.724016] 

[Nouveau] [PATCH v4 1/1] drm: allow limiting the scatter list size.

2020-09-07 Thread Gerd Hoffmann
Add drm_device argument to drm_prime_pages_to_sg(), so we can
call dma_max_mapping_size() to figure out the segment size limit
and call into __sg_alloc_table_from_pages() with the correct
limit.

This fixes virtio-gpu with SEV.  Possibly it'll fix other bugs
too, given that drm seems to totally ignore segment size limits
so far ...

v2: place max_segment in drm driver not gem object.
v3: move max_segment next to the other gem fields.
v4: just use dma_max_mapping_size().

Signed-off-by: Gerd Hoffmann 
---
 include/drm/drm_prime.h |  3 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c |  3 ++-
 drivers/gpu/drm/drm_gem_shmem_helper.c  |  2 +-
 drivers/gpu/drm/drm_prime.c | 13 ++---
 drivers/gpu/drm/etnaviv/etnaviv_gem.c   |  3 ++-
 drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c |  2 +-
 drivers/gpu/drm/msm/msm_gem.c   |  2 +-
 drivers/gpu/drm/msm/msm_gem_prime.c |  2 +-
 drivers/gpu/drm/nouveau/nouveau_prime.c |  2 +-
 drivers/gpu/drm/radeon/radeon_prime.c   |  2 +-
 drivers/gpu/drm/rockchip/rockchip_drm_gem.c |  5 +++--
 drivers/gpu/drm/tegra/gem.c |  2 +-
 drivers/gpu/drm/vgem/vgem_drv.c |  2 +-
 drivers/gpu/drm/xen/xen_drm_front_gem.c |  3 ++-
 14 files changed, 29 insertions(+), 17 deletions(-)

diff --git a/include/drm/drm_prime.h b/include/drm/drm_prime.h
index 9af7422b44cf..bf141e74a1c2 100644
--- a/include/drm/drm_prime.h
+++ b/include/drm/drm_prime.h
@@ -88,7 +88,8 @@ void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, void 
*vaddr);
 int drm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
 int drm_gem_dmabuf_mmap(struct dma_buf *dma_buf, struct vm_area_struct *vma);
 
-struct sg_table *drm_prime_pages_to_sg(struct page **pages, unsigned int 
nr_pages);
+struct sg_table *drm_prime_pages_to_sg(struct drm_device *dev,
+  struct page **pages, unsigned int 
nr_pages);
 struct dma_buf *drm_gem_prime_export(struct drm_gem_object *obj,
 int flags);
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
index 519ce4427fce..d7050ab95946 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
@@ -302,7 +302,8 @@ static struct sg_table *amdgpu_dma_buf_map(struct 
dma_buf_attachment *attach,
 
switch (bo->tbo.mem.mem_type) {
case TTM_PL_TT:
-   sgt = drm_prime_pages_to_sg(bo->tbo.ttm->pages,
+   sgt = drm_prime_pages_to_sg(obj->dev,
+   bo->tbo.ttm->pages,
bo->tbo.num_pages);
if (IS_ERR(sgt))
return sgt;
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c 
b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 4b7cfbac4daa..0a952f27c184 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -656,7 +656,7 @@ struct sg_table *drm_gem_shmem_get_sg_table(struct 
drm_gem_object *obj)
 
WARN_ON(shmem->base.import_attach);
 
-   return drm_prime_pages_to_sg(shmem->pages, obj->size >> PAGE_SHIFT);
+   return drm_prime_pages_to_sg(obj->dev, shmem->pages, obj->size >> 
PAGE_SHIFT);
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_get_sg_table);
 
diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index 1693aa7c14b5..8a6a3c99b7d8 100644
--- a/drivers/gpu/drm/drm_prime.c
+++ b/drivers/gpu/drm/drm_prime.c
@@ -802,9 +802,11 @@ static const struct dma_buf_ops drm_gem_prime_dmabuf_ops = 
 {
  *
  * This is useful for implementing _gem_object_funcs.get_sg_table.
  */
-struct sg_table *drm_prime_pages_to_sg(struct page **pages, unsigned int 
nr_pages)
+struct sg_table *drm_prime_pages_to_sg(struct drm_device *dev,
+  struct page **pages, unsigned int 
nr_pages)
 {
struct sg_table *sg = NULL;
+   size_t max_segment = 0;
int ret;
 
sg = kmalloc(sizeof(struct sg_table), GFP_KERNEL);
@@ -813,8 +815,13 @@ struct sg_table *drm_prime_pages_to_sg(struct page 
**pages, unsigned int nr_page
goto out;
}
 
-   ret = sg_alloc_table_from_pages(sg, pages, nr_pages, 0,
-   nr_pages << PAGE_SHIFT, GFP_KERNEL);
+   if (dev)
+   max_segment = dma_max_mapping_size(dev->dev);
+   if (max_segment == 0 || max_segment > SCATTERLIST_MAX_SEGMENT)
+   max_segment = SCATTERLIST_MAX_SEGMENT;
+   ret = __sg_alloc_table_from_pages(sg, pages, nr_pages, 0,
+ nr_pages << PAGE_SHIFT,
+ max_segment, GFP_KERNEL);
if (ret)
goto out;
 
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c 
b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
index f06e19e7be04..ea19f1d27275 100644
--- 

Re: [Nouveau] [PATCH v2 1/2] drm: allow limiting the scatter list size.

2020-09-07 Thread Gerd Hoffmann
> > +   /**
> > +* @max_segment:
> > +*
> > +* Max size for scatter list segments.  When unset the default
> > +* (SCATTERLIST_MAX_SEGMENT) is used.
> > +*/
> > +   size_t max_segment;
> 
> Is there no better place for this than "at the bottom"? drm_device is a
> huge structure, piling stuff up randomly doesn't make it better :-)

Moved next to the other gem fields for now (v3 posted).

> I think ideally we'd have a gem substruct like we have on the modeset side
> at least.

Phew, that'll be quite some churn in the tree.  And there aren't that many
gem-related fields in struct drm_device.

So you are looking for something like below (header changes only)?

take care,
  Gerd

diff --git a/include/drm/drm_device.h b/include/drm/drm_device.h
index c455ef404ca6..950167ede98a 100644
--- a/include/drm/drm_device.h
+++ b/include/drm/drm_device.h
@@ -299,22 +299,8 @@ struct drm_device {
/** @mode_config: Current mode config */
struct drm_mode_config mode_config;
 
-   /** @object_name_lock: GEM information */
-   struct mutex object_name_lock;
-
-   /** @object_name_idr: GEM information */
-   struct idr object_name_idr;
-
-   /** @vma_offset_manager: GEM information */
-   struct drm_vma_offset_manager *vma_offset_manager;
-
-   /**
-* @max_segment:
-*
-* Max size for scatter list segments for GEM objects.  When
-* unset the default (SCATTERLIST_MAX_SEGMENT) is used.
-*/
-   size_t max_segment;
+   /** @gem_config: Current GEM config */
+   struct drm_gem_config gem_config;
 
/** @vram_mm: VRAM MM memory manager */
struct drm_vram_mm *vram_mm;
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index 337a48321705..74129fb29fb8 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -39,6 +39,25 @@
 
 #include 
 
+struct drm_gem_config {
+   /** @object_name_lock: GEM information */
+   struct mutex object_name_lock;
+
+   /** @object_name_idr: GEM information */
+   struct idr object_name_idr;
+
+   /** @vma_offset_manager: GEM information */
+   struct drm_vma_offset_manager *vma_offset_manager;
+
+   /**
+* @max_segment:
+*
+* Max size for scatter list segments for GEM objects.  When
+* unset the default (SCATTERLIST_MAX_SEGMENT) is used.
+*/
+   size_t max_segment;
+};
+
 struct drm_gem_object;
 
 /**



Re: [Nouveau] [PATCH v2 1/2] drm: allow limiting the scatter list size.

2020-09-07 Thread Daniel Vetter
On Mon, Sep 7, 2020 at 8:39 AM Gerd Hoffmann  wrote:
>
> > > +   /**
> > > +* @max_segment:
> > > +*
> > > +* Max size for scatter list segments.  When unset the default
> > > +* (SCATTERLIST_MAX_SEGMENT) is used.
> > > +*/
> > > +   size_t max_segment;
> >
> > Is there no better place for this than "at the bottom"? drm_device is a
> > huge structure, piling stuff up randomly doesn't make it better :-)
>
> Moved next to the other gem fields for now (v3 posted).
>
> > I think ideally we'd have a gem substruct like we have on the modeset side
> > at least.
>
> Phew, that'll be quite some churn in the tree.  And there aren't that many
> gem-related fields in struct drm_device.
>
> So you are looking for something like below (header changes only)?

Hm yeah it's a lot less than I thought. And yes I think that would be neat.
-Daniel

>
> take care,
>   Gerd
>
> diff --git a/include/drm/drm_device.h b/include/drm/drm_device.h
> index c455ef404ca6..950167ede98a 100644
> --- a/include/drm/drm_device.h
> +++ b/include/drm/drm_device.h
> @@ -299,22 +299,8 @@ struct drm_device {
> /** @mode_config: Current mode config */
> struct drm_mode_config mode_config;
>
> -   /** @object_name_lock: GEM information */
> -   struct mutex object_name_lock;
> -
> -   /** @object_name_idr: GEM information */
> -   struct idr object_name_idr;
> -
> -   /** @vma_offset_manager: GEM information */
> -   struct drm_vma_offset_manager *vma_offset_manager;
> -
> -   /**
> -* @max_segment:
> -*
> -* Max size for scatter list segments for GEM objects.  When
> -* unset the default (SCATTERLIST_MAX_SEGMENT) is used.
> -*/
> -   size_t max_segment;
> +   /** @gem_config: Current GEM config */
> +   struct drm_gem_config gem_config;
>
> /** @vram_mm: VRAM MM memory manager */
> struct drm_vram_mm *vram_mm;
> diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
> index 337a48321705..74129fb29fb8 100644
> --- a/include/drm/drm_gem.h
> +++ b/include/drm/drm_gem.h
> @@ -39,6 +39,25 @@
>
>  #include 
>
> +struct drm_gem_config {
> +   /** @object_name_lock: GEM information */
> +   struct mutex object_name_lock;
> +
> +   /** @object_name_idr: GEM information */
> +   struct idr object_name_idr;
> +
> +   /** @vma_offset_manager: GEM information */
> +   struct drm_vma_offset_manager *vma_offset_manager;
> +
> +   /**
> +* @max_segment:
> +*
> +* Max size for scatter list segments for GEM objects.  When
> +* unset the default (SCATTERLIST_MAX_SEGMENT) is used.
> +*/
> +   size_t max_segment;
> +};
> +
>  struct drm_gem_object;
>
>  /**
>
> ___
> dri-devel mailing list
> dri-de...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel



-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


Re: [Nouveau] [PATCH v4 1/1] drm: allow limiting the scatter list size.

2020-09-07 Thread Daniel Vetter
On Mon, Sep 7, 2020 at 1:24 PM Gerd Hoffmann  wrote:
>
> Add drm_device argument to drm_prime_pages_to_sg(), so we can
> call dma_max_mapping_size() to figure the segment size limit
> and call into __sg_alloc_table_from_pages() with the correct
> limit.
>
> This fixes virtio-gpu with sev.  Possibly it'll fix other bugs
> too given that drm seems to totally ignore segment size limits
> so far ...
>
> v2: place max_segment in drm driver not gem object.
> v3: move max_segment next to the other gem fields.
> v4: just use dma_max_mapping_size().
>
> Signed-off-by: Gerd Hoffmann 

Uh, are you sure this works in all cases for virtio? The comments I've
found suggest very much not ... Or is that all very old stuff only
that no one cares about anymore?
-Daniel

> ---
>  include/drm/drm_prime.h |  3 ++-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c |  3 ++-
>  drivers/gpu/drm/drm_gem_shmem_helper.c  |  2 +-
>  drivers/gpu/drm/drm_prime.c | 13 ++---
>  drivers/gpu/drm/etnaviv/etnaviv_gem.c   |  3 ++-
>  drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c |  2 +-
>  drivers/gpu/drm/msm/msm_gem.c   |  2 +-
>  drivers/gpu/drm/msm/msm_gem_prime.c |  2 +-
>  drivers/gpu/drm/nouveau/nouveau_prime.c |  2 +-
>  drivers/gpu/drm/radeon/radeon_prime.c   |  2 +-
>  drivers/gpu/drm/rockchip/rockchip_drm_gem.c |  5 +++--
>  drivers/gpu/drm/tegra/gem.c |  2 +-
>  drivers/gpu/drm/vgem/vgem_drv.c |  2 +-
>  drivers/gpu/drm/xen/xen_drm_front_gem.c |  3 ++-
>  14 files changed, 29 insertions(+), 17 deletions(-)
>
> diff --git a/include/drm/drm_prime.h b/include/drm/drm_prime.h
> index 9af7422b44cf..bf141e74a1c2 100644
> --- a/include/drm/drm_prime.h
> +++ b/include/drm/drm_prime.h
> @@ -88,7 +88,8 @@ void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, void *vaddr);
>  int drm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
>  int drm_gem_dmabuf_mmap(struct dma_buf *dma_buf, struct vm_area_struct *vma);
>
> -struct sg_table *drm_prime_pages_to_sg(struct page **pages, unsigned int nr_pages);
> +struct sg_table *drm_prime_pages_to_sg(struct drm_device *dev,
> +  struct page **pages, unsigned int nr_pages);
>  struct dma_buf *drm_gem_prime_export(struct drm_gem_object *obj,
>  int flags);
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> index 519ce4427fce..d7050ab95946 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> @@ -302,7 +302,8 @@ static struct sg_table *amdgpu_dma_buf_map(struct dma_buf_attachment *attach,
>
> switch (bo->tbo.mem.mem_type) {
> case TTM_PL_TT:
> -   sgt = drm_prime_pages_to_sg(bo->tbo.ttm->pages,
> +   sgt = drm_prime_pages_to_sg(obj->dev,
> +   bo->tbo.ttm->pages,
> bo->tbo.num_pages);
> if (IS_ERR(sgt))
> return sgt;
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index 4b7cfbac4daa..0a952f27c184 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -656,7 +656,7 @@ struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_object *obj)
>
> WARN_ON(shmem->base.import_attach);
>
> -   return drm_prime_pages_to_sg(shmem->pages, obj->size >> PAGE_SHIFT);
> +   return drm_prime_pages_to_sg(obj->dev, shmem->pages, obj->size >> PAGE_SHIFT);
>  }
>  EXPORT_SYMBOL_GPL(drm_gem_shmem_get_sg_table);
>
> diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
> index 1693aa7c14b5..8a6a3c99b7d8 100644
> --- a/drivers/gpu/drm/drm_prime.c
> +++ b/drivers/gpu/drm/drm_prime.c
> @@ -802,9 +802,11 @@ static const struct dma_buf_ops drm_gem_prime_dmabuf_ops =  {
>   *
>   * This is useful for implementing _gem_object_funcs.get_sg_table.
>   */
> -struct sg_table *drm_prime_pages_to_sg(struct page **pages, unsigned int nr_pages)
> +struct sg_table *drm_prime_pages_to_sg(struct drm_device *dev,
> +  struct page **pages, unsigned int nr_pages)
>  {
> struct sg_table *sg = NULL;
> +   size_t max_segment = 0;
> int ret;
>
> sg = kmalloc(sizeof(struct sg_table), GFP_KERNEL);
> @@ -813,8 +815,13 @@ struct sg_table *drm_prime_pages_to_sg(struct page **pages, unsigned int nr_page
> goto out;
> }
>
> -   ret = sg_alloc_table_from_pages(sg, pages, nr_pages, 0,
> -   nr_pages << PAGE_SHIFT, GFP_KERNEL);
> +   if (dev)
> +   max_segment = dma_max_mapping_size(dev->dev);
> +   if (max_segment == 0 || max_segment > SCATTERLIST_MAX_SEGMENT)
> +   

[Nouveau] [PATCH v3 1/2] drm: allow limiting the scatter list size.

2020-09-07 Thread Gerd Hoffmann
Add max_segment argument to drm_prime_pages_to_sg().  When set, pass it
through to the __sg_alloc_table_from_pages() call; otherwise use
SCATTERLIST_MAX_SEGMENT.

Also add max_segment field to drm driver and pass it to
drm_prime_pages_to_sg() calls in drivers and helpers.

v2: place max_segment in drm driver not gem object.
v3: move max_segment next to the other gem fields.

Signed-off-by: Gerd Hoffmann 
---
 include/drm/drm_device.h|  8 
 include/drm/drm_prime.h |  3 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c |  3 ++-
 drivers/gpu/drm/drm_gem_shmem_helper.c  |  3 ++-
 drivers/gpu/drm/drm_prime.c | 10 +++---
 drivers/gpu/drm/etnaviv/etnaviv_gem.c   |  3 ++-
 drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c |  3 ++-
 drivers/gpu/drm/msm/msm_gem.c   |  3 ++-
 drivers/gpu/drm/msm/msm_gem_prime.c |  3 ++-
 drivers/gpu/drm/nouveau/nouveau_prime.c |  3 ++-
 drivers/gpu/drm/radeon/radeon_prime.c   |  3 ++-
 drivers/gpu/drm/rockchip/rockchip_drm_gem.c |  6 --
 drivers/gpu/drm/tegra/gem.c |  3 ++-
 drivers/gpu/drm/vgem/vgem_drv.c |  3 ++-
 drivers/gpu/drm/xen/xen_drm_front_gem.c |  3 ++-
 15 files changed, 43 insertions(+), 17 deletions(-)

diff --git a/include/drm/drm_device.h b/include/drm/drm_device.h
index f4f68e7a9149..c455ef404ca6 100644
--- a/include/drm/drm_device.h
+++ b/include/drm/drm_device.h
@@ -308,6 +308,14 @@ struct drm_device {
/** @vma_offset_manager: GEM information */
struct drm_vma_offset_manager *vma_offset_manager;
 
+   /**
+* @max_segment:
+*
+* Max size for scatter list segments for GEM objects.  When
+* unset the default (SCATTERLIST_MAX_SEGMENT) is used.
+*/
+   size_t max_segment;
+
/** @vram_mm: VRAM MM memory manager */
struct drm_vram_mm *vram_mm;
 
diff --git a/include/drm/drm_prime.h b/include/drm/drm_prime.h
index 9af7422b44cf..2c3689435cb4 100644
--- a/include/drm/drm_prime.h
+++ b/include/drm/drm_prime.h
@@ -88,7 +88,8 @@ void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, void *vaddr);
 int drm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
 int drm_gem_dmabuf_mmap(struct dma_buf *dma_buf, struct vm_area_struct *vma);
 
-struct sg_table *drm_prime_pages_to_sg(struct page **pages, unsigned int nr_pages);
+struct sg_table *drm_prime_pages_to_sg(struct page **pages, unsigned int nr_pages,
+  size_t max_segment);
 struct dma_buf *drm_gem_prime_export(struct drm_gem_object *obj,
 int flags);
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
index 519ce4427fce..8f6a647757e7 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
@@ -303,7 +303,8 @@ static struct sg_table *amdgpu_dma_buf_map(struct dma_buf_attachment *attach,
switch (bo->tbo.mem.mem_type) {
case TTM_PL_TT:
sgt = drm_prime_pages_to_sg(bo->tbo.ttm->pages,
-   bo->tbo.num_pages);
+   bo->tbo.num_pages,
+   obj->dev->max_segment);
if (IS_ERR(sgt))
return sgt;
 
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 4b7cfbac4daa..8f47b41b0b2f 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -656,7 +656,8 @@ struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_object *obj)
 
WARN_ON(shmem->base.import_attach);
 
-   return drm_prime_pages_to_sg(shmem->pages, obj->size >> PAGE_SHIFT);
+   return drm_prime_pages_to_sg(shmem->pages, obj->size >> PAGE_SHIFT,
+obj->dev->max_segment);
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_get_sg_table);
 
diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index 1693aa7c14b5..27c783fd6633 100644
--- a/drivers/gpu/drm/drm_prime.c
+++ b/drivers/gpu/drm/drm_prime.c
@@ -802,7 +802,8 @@ static const struct dma_buf_ops drm_gem_prime_dmabuf_ops =  {
  *
  * This is useful for implementing _gem_object_funcs.get_sg_table.
  */
-struct sg_table *drm_prime_pages_to_sg(struct page **pages, unsigned int nr_pages)
+struct sg_table *drm_prime_pages_to_sg(struct page **pages, unsigned int nr_pages,
+  size_t max_segment)
 {
struct sg_table *sg = NULL;
int ret;
@@ -813,8 +814,11 @@ struct sg_table *drm_prime_pages_to_sg(struct page **pages, unsigned int nr_page
goto out;
}
 
-   ret = sg_alloc_table_from_pages(sg, pages, nr_pages, 0,
-   nr_pages << PAGE_SHIFT, GFP_KERNEL);
+   if (max_segment == 0 || max_segment > 

Re: [Nouveau] [PATCH v5 1/2] drm/nouveau/kms/nv50-: Program notifier offset before requesting disp caps

2020-09-07 Thread Sasha Levin
Hi

[This is an automated email]

This commit has been processed because it contains a "Fixes:" tag
fixing commit: 4a2cb4181b07 ("drm/nouveau/kms/nv50-: Probe SOR and PIOR caps for DP interlacing support").

The bot has tested the following trees: v5.8.7.

v5.8.7: Failed to apply! Possible dependencies:
0a96099691c8 ("drm/nouveau/kms/nv50-: implement proper push buffer control logic")
0bc8ffe09771 ("drm/nouveau/kms/nv50-: Move hard-coded object handles into header")
12885ecbfe62 ("drm/nouveau/kms/nvd9-: Add CRC support")
203f6eaf4182 ("drm/nouveau/kms/nv50-: convert core update() to new push macros")
2853ccf09255 ("drm/nouveau/kms/nv50-: wrap existing command submission in nvif_push interface")
344c2e5a4796 ("drm/nouveau/kms/nv50-: use NVIDIA's headers for core or_ctrl()")
3c43c362b3a5 ("drm/nouveau/kms/nv50-: convert core caps_init() to new push macros")
5e691222eac6 ("drm/nouveau/kms/nv50-: convert core init() to new push macros")
9ec5e8204053 ("drm/nouveau/kms/nv50-: convert core or_ctrl() to new push macros")
b11d8ca151d0 ("drm/nouveau/kms/nv50-: use NVIDIA's headers for core init()")
b505935e56b2 ("drm/nouveau/kms/nv50-: convert core wndw_owner() to new push macros")
d8b24526ef68 ("drm/nouveau/kms/nv50-: use NVIDIA's headers for core caps_init()")
e79c9a0ba5e7 ("drm/nouveau/nvif: give every mem object a human-readable identifier")


NOTE: The patch will not be queued to stable trees until it is upstream.

How should we proceed with this patch?

-- 
Thanks
Sasha