Re: NVME aborting outstanding i/o and controller resets

2019-04-15 Thread Patrick M. Hausen
Some updates:

https://www.ixsystems.com/community/threads/nvme-problems-are-there-nightlies-based-on-12-stable-already.75685
https://jira.ixsystems.com/browse/NAS-101427

Kind regards,
Patrick
-- 
punkt.de GmbH   Internet - Dienstleistungen - Beratung
Kaiserallee 13a Tel.: 0721 9109-0 Fax: -100
76133 Karlsruhe i...@punkt.de   http://punkt.de
AG Mannheim 108285  Gf: Juergen Egeling

___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: NVME aborting outstanding i/o and controller resets

2019-04-15 Thread Patrick M. Hausen
Hi!

> On 15.04.2019 at 10:51, Patrick M. Hausen wrote:
> Now, RELENG_12 kernel, 11.2-RELEASE userland:
> 
> root@hurz:/var/tmp # uname -a
> FreeBSD hurz 12.0-STABLE FreeBSD 12.0-STABLE r346220 GENERIC  amd64
> root@hurz:/var/tmp #  dd if=/dev/urandom of=hurz bs=10m
> 
> Result:
> 
> no problems, not with two of these jobs running in parallel, not with a zpool scrub at the same time …

After they ran for half an hour I find these in /var/log/messages:

Apr 15 11:03:54 hurz kernel: nvme2: Missing interrupt
Apr 15 11:07:07 hurz kernel: nvme3: Missing interrupt
Apr 15 11:09:47 hurz kernel: nvme4: Missing interrupt

They are the only occurrences. The system does not seem to hang or otherwise
misbehave ...

Kind regards
Patrick


Re: NVME aborting outstanding i/o and controller resets

2019-04-15 Thread Patrick M. Hausen
> On 15.04.2019 at 08:46, Patrick M. Hausen wrote:
> So I’ll test RELENG_12 next. If that works, I can probably craft
> a FreeNAS 11.2 installation with a 12 kernel. I would hesitate to run
> HEAD in production, though.

root@hurz:/var/tmp # uname -a
FreeBSD hurz 11.2-RELEASE FreeBSD 11.2-RELEASE #0 r335510: Fri Jun 22 04:32:14 UTC 2018 r...@releng2.nyi.freebsd.org:/usr/obj/usr/src/sys/GENERIC  amd64
root@hurz:/var/tmp # dd if=/dev/urandom of=hurz bs=10m

Result:

Apr 15 09:56:07 hurz kernel: nvme4: resetting controller
Apr 15 09:56:07 hurz kernel: nvme3: resetting controller
Apr 15 09:56:07 hurz kernel: nvme4: aborting outstanding i/o
Apr 15 09:56:07 hurz kernel: nvme4: WRITE sqid:5 cid:126 nsid:1 lba:188361216 len:208
Apr 15 09:56:07 hurz kernel: nvme4: ABORTED - BY REQUEST (00/07) sqid:5 cid:126 cdw0:0
Apr 15 09:56:07 hurz kernel: nvme4: aborting outstanding i/o
Apr 15 09:56:07 hurz kernel: nvme4: WRITE sqid:5 cid:127 nsid:1 lba:188368784 len:64
Apr 15 09:56:07 hurz kernel: nvme4: ABORTED - BY REQUEST (00/07) sqid:5 cid:127 cdw0:0
Apr 15 09:56:07 hurz kernel: nvme4: aborting outstanding i/o
Apr 15 09:56:07 hurz kernel: nvme4: WRITE sqid:5 cid:125 nsid:1 lba:188371408 len:48
Apr 15 09:56:07 hurz kernel: nvme4: ABORTED - BY REQUEST (00/07) sqid:5 cid:125 cdw0:0
Apr 15 09:56:07 hurz kernel: nvme4: aborting outstanding i/o
Apr 15 09:56:07 hurz kernel: nvme4: WRITE sqid:5 cid:124 nsid:1 lba:188371456 len:16
Apr 15 09:56:07 hurz kernel: nvme4: ABORTED - BY REQUEST (00/07) sqid:5 cid:124 cdw0:0
[…]


Now, RELENG_12 kernel, 11.2-RELEASE userland:

root@hurz:/var/tmp # uname -a
FreeBSD hurz 12.0-STABLE FreeBSD 12.0-STABLE r346220 GENERIC  amd64
root@hurz:/var/tmp #  dd if=/dev/urandom of=hurz bs=10m

Result:

no problems, not with two of these jobs running in parallel, not with a zpool scrub at the same time …
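For repeatability, the test above can be wrapped in a small script. This is only a sketch: the directory, job count, and sizes are illustrative stand-ins (the runs above used "dd ... bs=10m" with no count limit, against the pool itself).

```shell
#!/bin/sh
# Sketch of the reproduction recipe: parallel sequential writers
# streaming /dev/urandom into files, then a scan of the kernel log.
# Sizes are scaled down here; DIR/JOBS/BS/COUNT are illustrative.
DIR=${DIR:-./dd_stress}
JOBS=${JOBS:-2}
BS=${BS:-65536}     # block size in bytes (the original test used bs=10m)
COUNT=${COUNT:-64}  # 64 * 64 KiB = 4 MiB per job; raise for a real run
mkdir -p "$DIR"

i=0
while [ "$i" -lt "$JOBS" ]; do
    dd if=/dev/urandom of="$DIR/hurz.$i" bs="$BS" count="$COUNT" 2>/dev/null &
    i=$((i + 1))
done
wait

# On FreeBSD, count resets / missing interrupts logged so far:
[ -r /var/log/messages ] && \
    grep -Ec 'nvme[0-9]+: (resetting controller|Missing interrupt)' \
        /var/log/messages || true
```

Running it with JOBS=2 while a zpool scrub is active approximates the combined load described above.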


I uploaded a complete dmesg of the system running RELENG_12:
https://cloud.hausen.com/s/5dRMsewCtDFHRYA

Is there anything else I should send? pciconf, nvmecontrol …?
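A sketch of how one might bundle those diagnostics into one collection script. pciconf(8), nvmecontrol(8) and vmstat(8) -i are the FreeBSD tools involved; the controller names are examples, and every call is guarded so the function simply skips tools that are absent.

```shell
#!/bin/sh
# Gather per-controller diagnostics for a bug report. All tool
# invocations are guarded with command -v so the function degrades
# to a no-op on systems where they are unavailable.
collect_nvme_diag() {
    for dev in "$@"; do
        # PCI capabilities of the controller (link width/speed etc.)
        command -v pciconf >/dev/null && pciconf -lvc "$dev"
        command -v nvmecontrol >/dev/null && {
            nvmecontrol identify "$dev"
            nvmecontrol logpage -p 2 "$dev"   # SMART / health information
        }
    done
    # Interrupt counters help separate "missing interrupt" from
    # firmware stalls: a stalled counter points at delivery problems.
    command -v vmstat >/dev/null && vmstat -i 2>/dev/null | grep nvme
    return 0
}

collect_nvme_diag nvme2 nvme3 nvme4
```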

Kind regards
Patrick


Re: NVME aborting outstanding i/o and controller resets

2019-04-14 Thread Patrick M. Hausen
Hi!

> On 14.04.2019 at 23:33, Patrick M. Hausen wrote:
> Since the system runs well with RELENG_11 and only 4 drives
> and there is this question about the cabling and shared resources
> I will try to set up a system with 5 drives, each of them *without*
> another one in a "pair" sharing the same MB connector.

So much for that theory: with 5 drives arranged in that way I get the
errors even during installation.

https://cloud.hausen.com/s/2myrX2Jr3fgLWGj
https://cloud.hausen.com/s/yryckgp56sH2CRe

So I’ll test RELENG_12 next. If that works, I can probably craft
a FreeNAS 11.2 installation with a 12 kernel. I would hesitate to run
HEAD in production, though.

Kind regards,
Patrick


Re: NVME aborting outstanding i/o and controller resets

2019-04-14 Thread Patrick M. Hausen
Alright ...

> On 13.04.2019 at 02:37, Warner Losh wrote:
> > There's been some minor improvements in -current here. Any chance you could 
> > experimentally try that with this test? You won't get as many I/O abort 
> > errors (since we don't print those), and we have a few more workarounds for 
> > the reset path (though honestly, it's still kinda stinky).
> 
> HEAD or RELENG_12, too?
> 
> HEAD is preferred, but any recent snapshot will do.

I could not reproduce the problem for a couple of hours with
an otherwise identical system but only 4 of these Intel drives.

Now the same test system with 6 drives, just like our FreeNAS boxes:
instantly reproducible.

I’ll upgrade to HEAD and see if that changes anything.

Kind regards
Patrick


Re: NVME aborting outstanding i/o and controller resets

2019-04-12 Thread Warner Losh
On Fri, Apr 12, 2019, 1:22 PM Patrick M. Hausen  wrote:

> Hi Warner,
>
> thanks for taking the time again …
>
> > OK. This means that whatever I/O workload we've done has caused the NVME
> card to stop responding for 30s, so we reset it.
>
> I figured as much ;-)
>
> > So it's an intel card.
>
> Yes - I already added this info several times. 6 of them, 2.5" NVMe "disk drives".
>

Yea, it was more of a knowing sigh...

> OK. That suggests Intel has a problem with their firmware.
>
> I came across this one:
> https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=211713
>
> Is it more probable that Intel has got buggy firmware here than that
> „we“ are missing interrupts?
>

More probable bad firmware. One of the things I think that is in HEAD is a
mitigation for this that looks for completed IO on timeout before doing a
reset.

> The mainboard is the Supermicro H11SSW-NT. Two NVME drive bays share
> a connector on the mainboard:
>
> NVMe Ports (NVMe 0~7, 10, 11, 14, 15)
>
> The H11SSW-iN/NT has twelve (12) NVMe ports (2 ports per 1 Slim
> SAS connector) on the motherboard.
> These ports provide high-speed, low-latency PCI-E 3.0 x4
> connections directly from the CPU to NVMe Solid
> State (SSD) drives. This greatly increases SSD data-throughput
> performance and significantly reduces PCI-E
> latency by simplifying driver/software requirements resulting from
> direct PCI-E interface from the CPU to the NVMe SSD drives.
>
> Is this purely mechanical, or do two drives share PCI-E resources? That
> would explain why the problems always come in pairs (nvme6 and nvme7,
> for example).
>

I'm unfamiliar with this setup, but coming in pairs strengthens the
missed-interrupt theory in my mind. Firmware issues usually don't come in pairs.

This afternoon I set up a system with 4 drives and I was not able to
> reproduce the problem.
> (We just got 3 more machines which happened to have 4 drives each and no
> M.2 directly
> on the mainboard).
> I will change the config to 6 drives like with the two FreeNAS systems in
> our data center.
>
> > [… nda(4) ...]
> > I doubt that would have any effect. They both throw as much I/O onto the
> card as possible in the default config.
>
> I found out - yes, just the same.
>

NDA drives with an iosched kernel will be able to rate limit, which may be
useful as a diagnostic tool...
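A sketch of what switching to nda(4) for that experiment would involve, as far as I understand it (worth double-checking against nvme(4) and nda(4) before relying on it):

```
# /boot/loader.conf: attach NVMe namespaces via CAM as nda(4)
# devices instead of the default nvd(4)
hw.nvme.use_nvd="0"

# kernel configuration: build with the dynamic I/O scheduler, which
# exposes per-device rate-limiting sysctls for nda devices
options CAM_IOSCHED_DYNAMIC
```

The namespaces then appear as ndaN instead of nvdN; ZFS finds the pool by its on-disk labels regardless of the device name change.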

> There's been some minor improvements in -current here. Any chance you
> could experimentally try that with this test? You won't get as many I/O
> abort errors (since we don't print those), and we have a few more
> workarounds for the reset path (though honestly, it's still kinda stinky).
>
> HEAD or RELENG_12, too?
>

HEAD is preferred, but any recent snapshot will do.

Warner

Kind regards,
> Patrick


Re: NVME aborting outstanding i/o and controller resets

2019-04-12 Thread Patrick M. Hausen
Hi Warner,

thanks for taking the time again …

> OK. This means that whatever I/O workload we've done has caused the NVME card to stop responding for 30s, so we reset it.

I figured as much ;-)

> So it's an intel card.

Yes - I already added this info several times. 6 of them, 2.5" NVMe "disk drives".

> OK. That suggests Intel has a problem with their firmware.

I came across this one:
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=211713

Is it more probable that Intel has got buggy firmware here than that
"we" are missing interrupts?

The mainboard is the Supermicro H11SSW-NT. Two NVME drive bays share
a connector on the mainboard:

NVMe Ports (NVMe 0~7, 10, 11, 14, 15)

The H11SSW-iN/NT has twelve (12) NVMe ports (2 ports per 1 Slim SAS
connector) on the motherboard.
These ports provide high-speed, low-latency PCI-E 3.0 x4 connections
directly from the CPU to NVMe Solid State (SSD) drives. This greatly
increases SSD data-throughput performance and significantly reduces
PCI-E latency by simplifying driver/software requirements resulting
from direct PCI-E interface from the CPU to the NVMe SSD drives.

Is this purely mechanical, or do two drives share PCI-E resources? That
would explain why the problems always come in pairs (nvme6 and nvme7,
for example).

This afternoon I set up a system with 4 drives and I was not able to
reproduce the problem.
(We just got 3 more machines which happened to have 4 drives each and
no M.2 directly on the mainboard.)
I will change the config to 6 drives like with the two FreeNAS systems
in our data center.

> [… nda(4) ...]
> I doubt that would have any effect. They both throw as much I/O onto the card 
> as possible in the default config.

I found out - yes, just the same.

> There's been some minor improvements in -current here. Any chance you could 
> experimentally try that with this test? You won't get as many I/O abort 
> errors (since we don't print those), and we have a few more workarounds for 
> the reset path (though honestly, it's still kinda stinky).

HEAD or RELENG_12, too?

Kind regards,
Patrick


Re: NVME aborting outstanding i/o and controller resets

2019-04-12 Thread Warner Losh
On Fri, Apr 12, 2019 at 6:00 AM Patrick M. Hausen  wrote:

> Hi all,
>
> my problems seem not to be TRIM related after all … and I can now
> quickly reproduce it.
>
> =
> root@freenas01[~]# sysctl vfs.zfs.trim.enabled
> vfs.zfs.trim.enabled: 0
> =
> root@freenas01[~]# cd /mnt/zfs
> root@freenas01[/mnt/zfs]# dd if=/dev/urandom of=hurz bs=10m
> ^C — system freezes temporarily
>

This does one I/O at a time to the filesystem, which then repackages the
I/Os such that multiple I/Os are going on with the NVMe card.


> =
> Apr 12 13:42:16 freenas01 nvme6: resetting controller
>

OK. This means that whatever I/O workload we've done has caused the NVME
card to stop responding for 30s, so we reset it.


> Apr 12 13:42:16 freenas01 nvme6: aborting outstanding i/o
> Apr 12 13:42:16 freenas01 nvme6: WRITE sqid:1 cid:117 nsid:1 lba:981825104
> len:176
> Apr 12 13:42:16 freenas01 nvme6: ABORTED - BY REQUEST (00/07) sqid:1
> cid:117 cdw0:0
>

But only one request was in flight...  And we keep doing it over and over
again, but to different LBAs, suggesting that we're stuttering: a few go
through and then things wedge again. This happens every 30ish seconds.
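That cadence is easy to confirm mechanically from the log itself. A small sketch (assuming syslog's default "Mmm dd hh:mm:ss" timestamps, all within the same day):

```shell
#!/bin/sh
# Print the gap in seconds between successive "resetting controller"
# lines read on stdin, e.g.:  reset_gaps < /var/log/messages
reset_gaps() {
    grep 'resetting controller' | awk '{
        split($3, t, ":")                      # $3 is the hh:mm:ss field
        now = t[1] * 3600 + t[2] * 60 + t[3]   # seconds since midnight
        if (NR > 1) print now - prev
        prev = now
    }'
}
```

Fed the four reset lines quoted above, this prints 33, 46 and 31, i.e. roughly one reset per 30-second timeout period.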


> Apr 12 13:42:49 freenas01 nvme6: resetting controller
> Apr 12 13:42:50 freenas01 nvme6: aborting outstanding i/o
> Apr 12 13:42:50 freenas01 nvme6: WRITE sqid:1 cid:127 nsid:1 lba:984107936
> len:96
> Apr 12 13:42:50 freenas01 nvme6: ABORTED - BY REQUEST (00/07) sqid:1
> cid:127 cdw0:0
> Apr 12 13:43:35 freenas01 nvme6: resetting controller
> Apr 12 13:43:35 freenas01 nvme6: aborting outstanding i/o
> Apr 12 13:43:35 freenas01 nvme6: WRITE sqid:1 cid:112 nsid:1 lba:976172032
> len:176
> Apr 12 13:43:35 freenas01 nvme6: ABORTED - BY REQUEST (00/07) sqid:1
> cid:112 cdw0:0
> Apr 12 13:44:06 freenas01 nvme7: resetting controller
>

And then this one goes wonkies.


> Apr 12 13:44:06 freenas01 nvme7: aborting outstanding i/o
> Apr 12 13:44:06 freenas01 nvme7: WRITE sqid:1 cid:111 nsid:1 lba:976199176
> len:248
> Apr 12 13:44:06 freenas01 nvme7: ABORTED - BY REQUEST (00/07) sqid:1
> cid:111 cdw0:0
> Apr 12 13:44:06 freenas01 nvme7: aborting outstanding i/o
> Apr 12 13:44:06 freenas01 nvme7: WRITE sqid:1 cid:102 nsid:1 lba:976199432
> len:248
> Apr 12 13:44:06 freenas01 nvme7: ABORTED - BY REQUEST (00/07) sqid:1
> cid:102 cdw0:0
> Apr 12 13:44:06 freenas01 nvme7: aborting outstanding i/o
> Apr 12 13:44:06 freenas01 nvme7: WRITE sqid:1 cid:112 nsid:1 lba:976199680
> len:8
> Apr 12 13:44:06 freenas01 nvme7: ABORTED - BY REQUEST (00/07) sqid:1
> cid:112 cdw0:0
> Apr 12 13:44:06 freenas01 nvme7: aborting outstanding i/o
> Apr 12 13:44:06 freenas01 nvme7: WRITE sqid:1 cid:105 nsid:1 lba:976199752
> len:64
> Apr 12 13:44:06 freenas01 nvme7: ABORTED - BY REQUEST (00/07) sqid:1
> cid:105 cdw0:0
> Apr 12 13:44:06 freenas01 nvme7: aborting outstanding i/o
> Apr 12 13:44:06 freenas01 nvme7: WRITE sqid:1 cid:122 nsid:1 lba:976199816
> len:64
> Apr 12 13:44:06 freenas01 nvme7: ABORTED - BY REQUEST (00/07) sqid:1
> cid:122 cdw0:0
> Apr 12 13:44:06 freenas01 nvme7: aborting outstanding i/o
> Apr 12 13:44:06 freenas01 nvme7: WRITE sqid:1 cid:103 nsid:1 lba:976199688
> len:64
> Apr 12 13:44:06 freenas01 nvme7: ABORTED - BY REQUEST (00/07) sqid:1
> cid:103 cdw0:0
> Apr 12 13:44:06 freenas01 nvme7: aborting outstanding i/o
> Apr 12 13:44:06 freenas01 nvme7: WRITE sqid:1 cid:126 nsid:1 lba:976200136
> len:56
> Apr 12 13:44:06 freenas01 nvme7: ABORTED - BY REQUEST (00/07) sqid:1
> cid:126 cdw0:0
> Apr 12 13:44:06 freenas01 nvme7: aborting outstanding i/o
> Apr 12 13:44:06 freenas01 nvme7: WRITE sqid:1 cid:106 nsid:1 lba:976200192
> len:8
> Apr 12 13:44:06 freenas01 nvme7: ABORTED - BY REQUEST (00/07) sqid:1
> cid:106 cdw0:0
> Apr 12 13:44:06 freenas01 nvme7: aborting outstanding i/o
> Apr 12 13:44:06 freenas01 nvme7: WRITE sqid:1 cid:107 nsid:1 lba:976200200
> len:64
> Apr 12 13:44:06 freenas01 nvme7: ABORTED - BY REQUEST (00/07) sqid:1
> cid:107 cdw0:0
> Apr 12 13:44:06 freenas01 nvme7: aborting outstanding i/o
> Apr 12 13:44:06 freenas01 nvme7: WRITE sqid:1 cid:127 nsid:1 lba:976200264
> len:64
> Apr 12 13:44:06 freenas01 nvme7: ABORTED - BY REQUEST (00/07) sqid:1
> cid:127 cdw0:0
> Apr 12 13:44:06 freenas01 nvme7: aborting outstanding i/o
> Apr 12 13:44:06 freenas01 nvme7: WRITE sqid:1 cid:113 nsid:1 lba:976200328
> len:120
> Apr 12 13:44:06 freenas01 nvme7: ABORTED - BY REQUEST (00/07) sqid:1
> cid:113 cdw0:0
> Apr 12 13:44:06 freenas01 nvme7: aborting outstanding i/o
> Apr 12 13:44:06 freenas01 nvme7: WRITE sqid:1 cid:108 nsid:1 lba:976200448
> len:72
> Apr 12 13:44:06 freenas01 nvme7: ABORTED - BY REQUEST (00/07) sqid:1
> cid:108 cdw0:0
> Apr 12 13:44:06 freenas01 nvme7: aborting outstanding i/o
> Apr 12 13:44:06 freenas01 nvme7: WRITE sqid:1 cid:116 nsid:1 lba:976200520
> len:64
> Apr 12 13:44:06 freenas01 nvme7: ABORTED - BY REQUEST (00/07) sqid:1
> cid:116 cdw0:0
> =
> root@freenas01[~]# nvmecontrol identify nvme6
> Controller Capabilities/Features
> 

Re: NVME aborting outstanding i/o and controller resets

2019-04-12 Thread Patrick M. Hausen
Hi all,

my problems seem not to be TRIM related after all … and I can now
quickly reproduce it.

=
root@freenas01[~]# sysctl vfs.zfs.trim.enabled
vfs.zfs.trim.enabled: 0
=
root@freenas01[~]# cd /mnt/zfs
root@freenas01[/mnt/zfs]# dd if=/dev/urandom of=hurz bs=10m
^C — system freezes temporarily
=
Apr 12 13:42:16 freenas01 nvme6: resetting controller
Apr 12 13:42:16 freenas01 nvme6: aborting outstanding i/o
Apr 12 13:42:16 freenas01 nvme6: WRITE sqid:1 cid:117 nsid:1 lba:981825104 len:176
Apr 12 13:42:16 freenas01 nvme6: ABORTED - BY REQUEST (00/07) sqid:1 cid:117 cdw0:0
Apr 12 13:42:49 freenas01 nvme6: resetting controller
Apr 12 13:42:50 freenas01 nvme6: aborting outstanding i/o
Apr 12 13:42:50 freenas01 nvme6: WRITE sqid:1 cid:127 nsid:1 lba:984107936 len:96
Apr 12 13:42:50 freenas01 nvme6: ABORTED - BY REQUEST (00/07) sqid:1 cid:127 cdw0:0
Apr 12 13:43:35 freenas01 nvme6: resetting controller
Apr 12 13:43:35 freenas01 nvme6: aborting outstanding i/o
Apr 12 13:43:35 freenas01 nvme6: WRITE sqid:1 cid:112 nsid:1 lba:976172032 len:176
Apr 12 13:43:35 freenas01 nvme6: ABORTED - BY REQUEST (00/07) sqid:1 cid:112 cdw0:0
Apr 12 13:44:06 freenas01 nvme7: resetting controller
Apr 12 13:44:06 freenas01 nvme7: aborting outstanding i/o
Apr 12 13:44:06 freenas01 nvme7: WRITE sqid:1 cid:111 nsid:1 lba:976199176 len:248
Apr 12 13:44:06 freenas01 nvme7: ABORTED - BY REQUEST (00/07) sqid:1 cid:111 cdw0:0
Apr 12 13:44:06 freenas01 nvme7: aborting outstanding i/o
Apr 12 13:44:06 freenas01 nvme7: WRITE sqid:1 cid:102 nsid:1 lba:976199432 len:248
Apr 12 13:44:06 freenas01 nvme7: ABORTED - BY REQUEST (00/07) sqid:1 cid:102 cdw0:0
Apr 12 13:44:06 freenas01 nvme7: aborting outstanding i/o
Apr 12 13:44:06 freenas01 nvme7: WRITE sqid:1 cid:112 nsid:1 lba:976199680 len:8
Apr 12 13:44:06 freenas01 nvme7: ABORTED - BY REQUEST (00/07) sqid:1 cid:112 cdw0:0
Apr 12 13:44:06 freenas01 nvme7: aborting outstanding i/o
Apr 12 13:44:06 freenas01 nvme7: WRITE sqid:1 cid:105 nsid:1 lba:976199752 len:64
Apr 12 13:44:06 freenas01 nvme7: ABORTED - BY REQUEST (00/07) sqid:1 cid:105 cdw0:0
Apr 12 13:44:06 freenas01 nvme7: aborting outstanding i/o
Apr 12 13:44:06 freenas01 nvme7: WRITE sqid:1 cid:122 nsid:1 lba:976199816 len:64
Apr 12 13:44:06 freenas01 nvme7: ABORTED - BY REQUEST (00/07) sqid:1 cid:122 cdw0:0
Apr 12 13:44:06 freenas01 nvme7: aborting outstanding i/o
Apr 12 13:44:06 freenas01 nvme7: WRITE sqid:1 cid:103 nsid:1 lba:976199688 len:64
Apr 12 13:44:06 freenas01 nvme7: ABORTED - BY REQUEST (00/07) sqid:1 cid:103 cdw0:0
Apr 12 13:44:06 freenas01 nvme7: aborting outstanding i/o
Apr 12 13:44:06 freenas01 nvme7: WRITE sqid:1 cid:126 nsid:1 lba:976200136 len:56
Apr 12 13:44:06 freenas01 nvme7: ABORTED - BY REQUEST (00/07) sqid:1 cid:126 cdw0:0
Apr 12 13:44:06 freenas01 nvme7: aborting outstanding i/o
Apr 12 13:44:06 freenas01 nvme7: WRITE sqid:1 cid:106 nsid:1 lba:976200192 len:8
Apr 12 13:44:06 freenas01 nvme7: ABORTED - BY REQUEST (00/07) sqid:1 cid:106 cdw0:0
Apr 12 13:44:06 freenas01 nvme7: aborting outstanding i/o
Apr 12 13:44:06 freenas01 nvme7: WRITE sqid:1 cid:107 nsid:1 lba:976200200 len:64
Apr 12 13:44:06 freenas01 nvme7: ABORTED - BY REQUEST (00/07) sqid:1 cid:107 cdw0:0
Apr 12 13:44:06 freenas01 nvme7: aborting outstanding i/o
Apr 12 13:44:06 freenas01 nvme7: WRITE sqid:1 cid:127 nsid:1 lba:976200264 len:64
Apr 12 13:44:06 freenas01 nvme7: ABORTED - BY REQUEST (00/07) sqid:1 cid:127 cdw0:0
Apr 12 13:44:06 freenas01 nvme7: aborting outstanding i/o
Apr 12 13:44:06 freenas01 nvme7: WRITE sqid:1 cid:113 nsid:1 lba:976200328 len:120
Apr 12 13:44:06 freenas01 nvme7: ABORTED - BY REQUEST (00/07) sqid:1 cid:113 cdw0:0
Apr 12 13:44:06 freenas01 nvme7: aborting outstanding i/o
Apr 12 13:44:06 freenas01 nvme7: WRITE sqid:1 cid:108 nsid:1 lba:976200448 len:72
Apr 12 13:44:06 freenas01 nvme7: ABORTED - BY REQUEST (00/07) sqid:1 cid:108 cdw0:0
Apr 12 13:44:06 freenas01 nvme7: aborting outstanding i/o
Apr 12 13:44:06 freenas01 nvme7: WRITE sqid:1 cid:116 nsid:1 lba:976200520 len:64
Apr 12 13:44:06 freenas01 nvme7: ABORTED - BY REQUEST (00/07) sqid:1 cid:116 cdw0:0
=
root@freenas01[~]# nvmecontrol identify nvme6
Controller Capabilities/Features

Vendor ID:  8086
Subsystem Vendor ID:8086
Serial Number:  BTLJ90230EC61P0FGN
Model Number:   INTEL SSDPE2KX010T8
Firmware Version:   VDV10131
Recommended Arb Burst:  0
IEEE OUI Identifier:e4 d2 5c
Multi-Interface Cap:00
Max Data Transfer Size: 131072
Controller ID:  0x00

Admin Command Set Attributes

Security Send/Receive:   Not Supported
Format NVM:  Supported
Firmware Activate/Download:  Supported
Namespace Managment: Supported
Abort Command Limit: 4
Async Event Request Limit:   4
Number of Firmware Slots:1
Firmware Slot 1 Read-Only:   No
Per-Namespace SMART Log: N