Re: Please remove warnings from msi.c:1269 and blk-mq-pci.c:52 some day

2019-01-15 Thread John Garry

On 14/01/2019 18:05, Jens Axboe wrote:
> On 1/14/19 11:05 AM, Christoph Hellwig wrote:
>> On Mon, Jan 14, 2019 at 11:02:07AM -0700, Keith Busch wrote:
>>> User fin4478 informs me Ming's patch above fixes
>>> the warnings, but replies to the mailing lists from that email account
>>> are blocked for "security or policy reasons", so he asked me to forward
>>> the successful test result.
>>>
>>> Christoph, Jens: may we queue this up for -rc3?
>>
>> It is in my local queue for -rc3 already.

Any chance we could have this included in linux-next?

Cheers

> Great, thanks.






Re: Please remove warnings from msi.c:1269 and blk-mq-pci.c:52 some day

2019-01-14 Thread Jens Axboe
On 1/14/19 11:05 AM, Christoph Hellwig wrote:
> On Mon, Jan 14, 2019 at 11:02:07AM -0700, Keith Busch wrote:
>> User fin4478 informs me Ming's patch above fixes
>> the warnings, but replies to the mailing lists from that email account
>> are blocked for "security or policy reasons", so he asked me to forward
>> the successful test result.
>>
>> Christoph, Jens: may we queue this up for -rc3?
> 
> It is in my local queue for -rc3 already.

Great, thanks.

-- 
Jens Axboe



Re: Please remove warnings from msi.c:1269 and blk-mq-pci.c:52 some day

2019-01-14 Thread Christoph Hellwig
On Mon, Jan 14, 2019 at 11:02:07AM -0700, Keith Busch wrote:
> User fin4478 informs me Ming's patch above fixes
> the warnings, but replies to the mailing lists from that email account
> are blocked for "security or policy reasons", so he asked me to forward
> the successful test result.
> 
> Christoph, Jens: may we queue this up for -rc3?

It is in my local queue for -rc3 already.


Re: Please remove warnings from msi.c:1269 and blk-mq-pci.c:52 some day

2019-01-14 Thread Keith Busch
[+linux-n...@lists.infradead.org]

On Mon, Jan 14, 2019 at 10:03:39AM -0700, Keith Busch wrote:
> [+Ming]
> On Mon, Jan 14, 2019 at 08:31:45AM -0600, Bjorn Helgaas wrote:
> > [+cc Dou, Jens, Thomas, Christoph, linux-pci, LKML]
> > 
> > On Sun, Jan 13, 2019 at 11:24 PM fin4478 fin4478  
> > wrote:
> > >
> > > Hi,
> > >
> > > A regression since the 4.20 kernel: I have an Asgard 256GB NVMe drive,
> > > and my custom non-debug 1000 Hz timer 5.0 kernel started to throw a
> > > couple of warning messages at boot. My system works OK:
> 
> I think Ming's patch here fixes this:
> 
>   http://lists.infradead.org/pipermail/linux-nvme/2019-January/021902.html

User fin4478 informs me Ming's patch above fixes
the warnings, but replies to the mailing lists from that email account
are blocked for "security or policy reasons", so he asked me to forward
the successful test result.

Christoph, Jens: may we queue this up for -rc3?


Re: Please remove warnings from msi.c:1269 and blk-mq-pci.c:52 some day

2019-01-14 Thread Keith Busch
[+Ming]

On Mon, Jan 14, 2019 at 08:31:45AM -0600, Bjorn Helgaas wrote:
> [+cc Dou, Jens, Thomas, Christoph, linux-pci, LKML]
> 
> On Sun, Jan 13, 2019 at 11:24 PM fin4478 fin4478  wrote:
> >
> > Hi,
> >
> > A regression since the 4.20 kernel: I have an Asgard 256GB NVMe drive,
> > and my custom non-debug 1000 Hz timer 5.0 kernel started to throw a
> > couple of warning messages at boot. My system works OK:

I think Ming's patch here fixes this:

  http://lists.infradead.org/pipermail/linux-nvme/2019-January/021902.html


Re: Please remove warnings from msi.c:1269 and blk-mq-pci.c:52 some day

2019-01-14 Thread Bjorn Helgaas
[+cc Dou, Jens, Thomas, Christoph, linux-pci, LKML]

On Sun, Jan 13, 2019 at 11:24 PM fin4478 fin4478  wrote:
>
> Hi,
>
> A regression since the 4.20 kernel: I have an Asgard 256GB NVMe drive,
> and my custom non-debug 1000 Hz timer 5.0 kernel started to throw a
> couple of warning messages at boot. My system works OK:
>
> [1.849778] nvme nvme0: missing or invalid SUBNQN field.
> [1.858041] nvme nvme0: allocated 64 MiB host memory buffer.
> [1.886727] nvme nvme0: 16/0/0 default/read/poll queues
> [1.886737] WARNING: CPU: 8 PID: 1254 at drivers/pci/msi.c:1269
> pci_irq_get_affinity+0x36/0x80
> [1.886738] Modules linked in: realtek r8169 xhci_pci nvme xhci_hcd
> nvme_core
> [1.886742] CPU: 8 PID: 1254 Comm: kworker/u32:8 Not tainted
> 5.0.0-rc2 #5
> [1.886743] Hardware name: System manufacturer System Product
> Name/PRIME B350M-K, BIOS 4207 12/07/2018
> [1.886747] Workqueue: nvme-reset-wq nvme_reset_work [nvme]
> [1.886749] RIP: 0010:pci_irq_get_affinity+0x36/0x80
> [1.886750] Code: 48 8b 87 88 02 00 00 48 81 c7 88 02 00 00 48 39 c7
> 74 17 85 f6 74 4e 31 d2 eb 04 39 d6 74 46 48 8b 00 83 c2 01 48 39 f8 75
> f1 <0f> 0b 31 c0 c3 83 e2 02 48 c7 c0 a8 dc 29 82 74 29 48 8b 97 88 02
> [1.886751] RSP: 0018:c90001ee7cf8 EFLAGS: 00010246
> [1.886752] RAX: 888214423a88 RBX: 000f RCX:
> 0040
> [1.886752] RDX: 0010 RSI: 0010 RDI:
> 888214423a88
> [1.886753] RBP: 888213a84200 R08: 888216c22400 R09:
> 8882128da480
> [1.886753] R10:  R11: 0001 R12:
> 0001
> [1.886754] R13: 888214423800 R14:  R15:
> 888213005008
> [1.886754] FS:  () GS:888216c0()
> knlGS:
> [1.886755] CS:  0010 DS:  ES:  CR0: 80050033
> [1.886756] CR2: 7f2eada39441 CR3: 00021288 CR4:
> 003406e0
> [1.886756] Call Trace:
> [1.886759]  blk_mq_pci_map_queues+0x32/0xc0
> [1.886762]  nvme_pci_map_queues+0x7b/0xb0 [nvme]
> [1.886764]  blk_mq_alloc_tag_set+0x113/0x2c0
> [1.886767]  nvme_reset_work+0x1210/0x166d [nvme]
> [1.886769]  process_one_work+0x1c9/0x350
> [1.886771]  worker_thread+0x210/0x3c0
> [1.886772]  ? rescuer_thread+0x320/0x320
> [1.886773]  kthread+0x106/0x120
> [1.886774]  ? kthread_create_on_node+0x60/0x60
> [1.886777]  ret_from_fork+0x1f/0x30
> [1.886778] ---[ end trace 1f86c10439edbd73 ]---
> [1.886783] WARNING: CPU: 8 PID: 1254 at block/blk-mq-pci.c:52
> blk_mq_pci_map_queues+0xb9/0xc0
> [1.886784] Modules linked in: realtek r8169 xhci_pci nvme xhci_hcd
> nvme_core
> [1.886786] CPU: 8 PID: 1254 Comm: kworker/u32:8 Tainted: G
> W 5.0.0-rc2 #5
> [1.886786] Hardware name: System manufacturer System Product
> Name/PRIME B350M-K, BIOS 4207 12/07/2018
> [1.886788] Workqueue: nvme-reset-wq nvme_reset_work [nvme]
> [1.886790] RIP: 0010:blk_mq_pci_map_queues+0xb9/0xc0
> [1.886791] Code: d0 c7 04 91 00 00 00 00 48 89 de 89 c7 e8 4f 18 47
> 00 3b 05 5d d9 fb 00 72 e1 5b 31 c0 5d 41 5c 41 5d 41 5e 41 5f c3 31 c0
> c3 <0f> 0b eb c4 90 90 90 41 57 49 89 ff 41 56 41 55 41 54 55 53 48 8b
> [1.886791] RSP: 0018:c90001ee7d00 EFLAGS: 00010216
> [1.886792] RAX:  RBX: 000f RCX:
> 0040
> [1.886792] RDX: 0010 RSI: 0010 RDI:
> 888214423a88
> [1.886793] RBP:  R08: 888216c22400 R09:
> 8882128da480
> [1.886793] R10:  R11: 0001 R12:
> 0001
> [1.886794] R13: 888214423800 R14:  R15:
> 888213005008
> [1.886794] FS:  () GS:888216c0()
> knlGS:
> [1.886795] CS:  0010 DS:  ES:  CR0: 80050033
> [1.886796] CR2: 7f2eada39441 CR3: 00021288 CR4:
> 003406e0
> [1.886796] Call Trace:
> [1.886798]  nvme_pci_map_queues+0x7b/0xb0 [nvme]
> [1.886800]  blk_mq_alloc_tag_set+0x113/0x2c0
> [1.886802]  nvme_reset_work+0x1210/0x166d [nvme]
> [1.886803]  process_one_work+0x1c9/0x350
> [1.886804]  worker_thread+0x210/0x3c0
> [1.886805]  ? rescuer_thread+0x320/0x320
> [1.886806]  kthread+0x106/0x120
> [1.886807]  ? kthread_create_on_node+0x60/0x60
> [1.886808]  ret_from_fork+0x1f/0x30
> [1.886809] ---[ end trace 1f86c10439edbd74 ]---
> [1.893002] nvme nvme0: nvme_report_ns_ids: Identify Descriptors
> failed
> [1.894725] nvme nvme0: nvme_report_ns_ids: Identify Descriptors
> failed
>
> System:
>   Host: ryzenpc Kernel: 5.0.0-rc2 x86_64 bits: 64 Desktop: Xfce 4.12.4
>   Distro: Debian GNU/Linux buster/sid
> Machine:
>   Type: Desktop Mobo: ASUSTeK model: PRIME B350M-K v: Rev X.0x
>   serial:  UEFI [Legacy]: American Megatrends v: 4207
>   date: 12/07/2018
> CPU:
>   6-Core: AMD Ryzen 5 1600 type: MT MCP