On Thu, Jul 28, 2016 at 3:29 AM, Borja Marcos <bor...@sarenet.es> wrote:

> Hi :)
>
> Still experimenting with NVMe drives and FreeBSD, and I have run into
> problems, I think.
>
> I've got a server with 10 Intel DC P3500 NVMe drives. Right now, running
> 11-BETA2.
>
> I have updated the firmware in the drives to the latest version (8DV10174)
> using the Data Center Tools.
> And I've formatted them for 4 KB blocks (LBA format #3).
>
> nvmecontrol identify nvme0ns1
> Size (in LBAs):              488378646 (465M)
> Capacity (in LBAs):          488378646 (465M)
> Utilization (in LBAs):       488378646 (465M)
> Thin Provisioning:           Not Supported
> Number of LBA Formats:       7
> Current LBA Format:          LBA Format #03
> LBA Format #00: Data Size:   512  Metadata Size:     0
> LBA Format #01: Data Size:   512  Metadata Size:     8
> LBA Format #02: Data Size:   512  Metadata Size:    16
> LBA Format #03: Data Size:  4096  Metadata Size:     0
> LBA Format #04: Data Size:  4096  Metadata Size:     8
> LBA Format #05: Data Size:  4096  Metadata Size:    64
> LBA Format #06: Data Size:  4096  Metadata Size:   128
>
>
> ZFS properly detects the 4 KB block size and sets the correct ashift (12).
> But I’ve found these error messages
> generated while I created a pool (zpool create tank raidz2 /dev/nvd[0-8]
> spare /dev/nvd9)
>
> Jul 28 13:16:11 nvme2 kernel: nvme0: DATASET MANAGEMENT sqid:6 cid:63 nsid:1
> Jul 28 13:16:11 nvme2 kernel: nvme0: LBA OUT OF RANGE (00/80) sqid:6 cid:63 cdw0:0
> Jul 28 13:16:11 nvme2 kernel: nvme0: DATASET MANAGEMENT sqid:6 cid:62 nsid:1
> Jul 28 13:16:11 nvme2 kernel: nvme0: LBA OUT OF RANGE (00/80) sqid:6 cid:62 cdw0:0
> Jul 28 13:16:11 nvme2 kernel: nvme0: DATASET MANAGEMENT sqid:6 cid:61 nsid:1
> Jul 28 13:16:11 nvme2 kernel: nvme0: LBA OUT OF RANGE (00/80) sqid:6 cid:61 cdw0:0
> Jul 28 13:16:11 nvme2 kernel: nvme0: DATASET MANAGEMENT sqid:6 cid:60 nsid:1
> Jul 28 13:16:11 nvme2 kernel: nvme0: LBA OUT OF RANGE (00/80) sqid:6 cid:60 cdw0:0
> Jul 28 13:16:11 nvme2 kernel: nvme0: DATASET MANAGEMENT sqid:6 cid:59 nsid:1
> Jul 28 13:16:11 nvme2 kernel: nvme0: LBA OUT OF RANGE (00/80) sqid:6 cid:59 cdw0:0
> Jul 28 13:16:11 nvme2 kernel: nvme0: DATASET MANAGEMENT sqid:6 cid:58 nsid:1
> Jul 28 13:16:11 nvme2 kernel: nvme0: LBA OUT OF RANGE (00/80) sqid:6 cid:58 cdw0:0
> Jul 28 13:16:11 nvme2 kernel: nvme0: DATASET MANAGEMENT sqid:6 cid:57 nsid:1
> Jul 28 13:16:11 nvme2 kernel: nvme0: LBA OUT OF RANGE (00/80) sqid:6 cid:57 cdw0:0
> Jul 28 13:16:11 nvme2 kernel: nvme0: DATASET MANAGEMENT sqid:6 cid:56 nsid:1
> Jul 28 13:16:11 nvme2 kernel: nvme0: LBA OUT OF RANGE (00/80) sqid:6 cid:56 cdw0:0
>
> And the same for the rest of the drives [0-9].
>
> Should I worry?
>

Yes, you should worry.  DATASET MANAGEMENT is the NVMe command the driver
issues for TRIM (BIO_DELETE), so an LBA OUT OF RANGE status means the drive
is rejecting those deallocation requests outright.

Normally we could use the dump_debug sysctls to help debug this - these
sysctls will dump the NVMe I/O submission and completion queues.  But in
this case the LBA data is in the payload, not the NVMe submission entries,
so dump_debug will not help as much as dumping the NVMe DSM payload
directly.
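(If memory serves, those live per queue pair, e.g. "sysctl
dev.nvme.0.ioq0.dump_debug=1" for the first I/O queue on nvme0, though the
exact OID names may differ across versions.)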

Could you try the attached patch and send the output after recreating your pool?
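
The patch itself rides along as the delete_debug.patch attachment below, so
it isn't inlined here.  As a rough idea of the kind of dump it produces,
here is a minimal userland sketch in C: the struct layout follows the NVMe
spec's 16-byte Dataset Management range entry, but the field names, the
helper, and the sample values are all illustrative assumptions, not code
taken from the actual patch.

/*
 * Hedged sketch, not the real delete_debug.patch: model a DSM (TRIM)
 * payload and print each range, flagging any range that runs past the
 * end of the namespace - exactly the condition that would draw the
 * LBA OUT OF RANGE status seen in the logs above.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* 16-byte range entry per the NVMe spec; field names are assumptions. */
struct dsm_range {
	uint32_t attributes;	/* context attributes; unused for deallocate */
	uint32_t length;	/* number of LBAs in the range */
	uint64_t starting_lba;	/* first LBA of the range */
};

static void
dump_dsm_payload(const struct dsm_range *r, int nranges, uint64_t ns_lbas)
{
	for (int i = 0; i < nranges; i++)
		printf("range %d: lba %" PRIu64 " len %" PRIu32 "%s\n", i,
		    r[i].starting_lba, r[i].length,
		    r[i].starting_lba + r[i].length > ns_lbas ?
		    "  <- past end of namespace" : "");
}

int
main(void)
{
	/* Made-up ranges against the 488378646-LBA namespace shown above;
	 * the second one deliberately overruns it. */
	struct dsm_range sample[] = {
		{ 0, 4096, 1000000 },
		{ 0, 8192, 488378640 },
	};

	dump_dsm_payload(sample, 2, 488378646);
	return (0);
}

Compiled and run, this prints both ranges and flags the second; the real
patch presumably logs the same starting-LBA/length fields from the in-kernel
payload as each DATASET MANAGEMENT request is submitted, which is what will
show whether ZFS or the driver is generating the out-of-range deletes.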

-Jim

> Thanks!
>
>
>
>
> Borja.
>
>
>

Attachment: delete_debug.patch
Description: Binary data

_______________________________________________
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"
