On Thu, Dec 08, 2022 at 09:08:12AM +0100, Klaus Jensen wrote:
> On Dec 8 08:16, Klaus Jensen wrote:
> > On Dec 7 09:49, Guenter Roeck wrote:
> > > Hi,
> > >
> > > On Thu, Jun 16, 2022 at 08:34:07PM +0800, Jinhao Fan wrote:
> > > > Implement the Doorbell Buffer Config command (Section 5.7 in NVMe Spec 1.3)
> > > > and the Shadow Doorbell buffer & EventIdx buffer handling logic (Section 7.13
> > > > in NVMe Spec 1.3). For queues created before the Doorbell Buffer Config
> > > > command, the nvme_dbbuf_config function tries to associate each existing
> > > > SQ and CQ with its Shadow Doorbell buffer and EventIdx buffer address.
> > > > Queues created after the Doorbell Buffer Config command will have the
> > > > doorbell buffers associated with them when they are initialized.
> > > >
> > > > In nvme_process_sq and nvme_post_cqe, proactively check for Shadow
> > > > Doorbell buffer changes instead of waiting for doorbell register changes.
> > > > This reduces the number of MMIOs.
> > > >
> > > > In nvme_process_db(), update the shadow doorbell buffer value with
> > > > the doorbell register value if it is the admin queue. This is a hack,
> > > > since hosts like the Linux NVMe driver and SPDK do not use the shadow
> > > > doorbell buffer for the admin queue. Copying the doorbell register
> > > > value to the shadow doorbell buffer allows us to support these hosts
> > > > as well as spec-compliant hosts that use the shadow doorbell buffer for
> > > > the admin queue.
> > > >
> > > > Signed-off-by: Jinhao Fan <fanjinhao...@ict.ac.cn>
> > >
> > > I noticed that I can no longer boot Linux kernels from nvme on riscv64
> > > systems. The problem is seen with qemu v7.1 and qemu v7.2-rc4.
> > > The log shows:
> > >
> > > [ 35.904128] nvme nvme0: I/O 642 (I/O Cmd) QID 1 timeout, aborting
> > > [ 35.905000] EXT4-fs (nvme0n1): mounting ext2 file system using the ext4 subsystem
> > > [ 66.623863] nvme nvme0: I/O 643 (I/O Cmd) QID 1 timeout, aborting
> > > [ 97.343989] nvme nvme0: Abort status: 0x0
> > > [ 97.344355] nvme nvme0: Abort status: 0x0
> > > [ 97.344647] nvme nvme0: I/O 7 QID 0 timeout, reset controller
> > > [ 97.350568] nvme nvme0: I/O 644 (I/O Cmd) QID 1 timeout, aborting
> > >
> > > This is with the mainline Linux kernel (v6.1-rc8).
> > >
> > > Bisect points to this patch. Reverting this patch and a number of
> > > associated patches (to fix conflicts) fixes the problem.
> > >
> > > 06143d8771 Revert "hw/nvme: Implement shadow doorbell buffer support"
> > > acb4443e3a Revert "hw/nvme: Use ioeventfd to handle doorbell updates"
> > > d5fd309feb Revert "hw/nvme: do not enable ioeventfd by default"
> > > 1ca1e6c47c Revert "hw/nvme: unregister the event notifier handler on the main loop"
> > > 2d26abd51e Revert "hw/nvme: skip queue processing if notifier is cleared"
> > > 99d411b5a5 Revert "hw/nvme: reenable cqe batching"
> > > 2293d3ca6c Revert "hw/nvme: Add trace events for shadow doorbell buffer"
> > >
> > > Qemu command line:
> > >
> > > qemu-system-riscv64 -M virt -m 512M \
> > >     -kernel arch/riscv/boot/Image -snapshot \
> > >     -device nvme,serial=foo,drive=d0 \
> > >     -drive file=rootfs.ext2,if=none,format=raw,id=d0 \
> > >     -bios default \
> > >     -append "root=/dev/nvme0n1 console=ttyS0,115200 earlycon=uart8250,mmio,0x10000000,115200" \
> > >     -nographic -monitor none
> > >
> > > Guenter
> >
> > Hi Guenter,
> >
> > Thanks for the bisect.
> >
> > The shadow doorbell is also an obvious candidate for this regression. I
> > wonder if this could be a kernel bug, since we are not observing this on
> > other architectures.
> > The memory barriers required are super finicky, but
> > in QEMU all the operations are associated with full memory barriers. The
> > barriers are more fine-grained in the kernel though.
> >
> > I will dig into this together with Keith.
>
> A cq head doorbell mmio is skipped... And it is not the fault of the
> kernel. The kernel is in its good right to skip the mmio since the cq
> eventidx is not properly updated.
>
> Adding that, and it boots properly on riscv. But I'm perplexed as to why
> this didn't show up on our regularly tested platforms.
>
> Gonna try to get this in for 7.2!
I see another problem with sparc64.

[ 5.261508] could not locate request for tag 0x0
[ 5.261711] nvme nvme0: invalid id 0 completed on queue 1

That is seen repeatedly until the request times out. I'll test with your
patch to see if it resolves this problem as well, and will bisect
otherwise.

Guenter