Module Name:	src
Committed By:	wiz
Date:		Wed Sep 21 20:12:12 UTC 2016
Modified Files:
	src/share/man/man4: nvme.4

Log Message:
Various fixes: wording, sections, sort order, articles.


To generate a diff of this commit:
cvs rdiff -u -r1.5 -r1.6 src/share/man/man4/nvme.4

Please note that diffs are not public domain; they are subject to the
copyright notices on the relevant files.
Modified files:

Index: src/share/man/man4/nvme.4
diff -u src/share/man/man4/nvme.4:1.5 src/share/man/man4/nvme.4:1.6
--- src/share/man/man4/nvme.4:1.5	Wed Sep 21 20:01:03 2016
+++ src/share/man/man4/nvme.4	Wed Sep 21 20:12:12 2016
@@ -1,4 +1,4 @@
-.\" $NetBSD: nvme.4,v 1.5 2016/09/21 20:01:03 jdolecek Exp $
+.\" $NetBSD: nvme.4,v 1.6 2016/09/21 20:12:12 wiz Exp $
 .\" $OpenBSD: nvme.4,v 1.2 2016/04/14 11:53:37 jmc Exp $
 .\"
 .\" Copyright (c) 2016 David Gwynne <d...@openbsd.org>
@@ -30,64 +30,65 @@
 driver provides support for NVMe, or NVM storage controllers conforming to
 the Non-Volatile Memory Host Controller Interface specification.
 Controllers complying to specification version 1.1 and 1.2 are known to work.
-Other versions should too for normal operation with exception of some
-passthrough commands.
+Other versions should work too for normal operation with the exception of some
+pass-through commands.
 .Pp
-Driver supports following features:
+The driver supports the following features:
 .Bl -bullet -compact -offset indent
 .It
 controller and namespace configuration and management using
-.Xr nvmectl 1
+.Xr nvmectl 8
 .It
 highly parallel I/O using per-CPU I/O queues
 .It
 PCI MSI/MSI-X attachment, and INTx for legacy systems
 .El
 .Pp
-On systems supporting MSI/MSI-X,
+On systems supporting MSI/MSI-X, the
 .Nm
 driver uses per-CPU IO queue pairs for lockless and highly parallelized I/O.
 Interrupt handlers are scheduled on distinct CPUs.
-Driver allocates as many interrupt vectors as available, up to number
+The driver allocates as many interrupt vectors as available, up to number
 of CPUs + 1.
 MSI supports up to 32 interrupt vectors within the system, MSI-X can
 have up to 2k.
-Each I/O queue pair has separate command circular buffer.
+Each I/O queue pair has a separate command circular buffer.
+The
 .Nm
-specification allows up to 64k commands per queue, driver currently allocates
+specification allows up to 64k commands per queue, the driver currently allocates
 1024 items per queue by default.
-Command submissions are done always on current CPU, command completion
-interrupt is handled on CPU according to I/O queue ID - first I/O queue on CPU0,
-second I/O queue on CPU1 etc.
+Command submissions are done always on the current CPU, the command completion
+interrupt is handled on the CPU corresponding to the I/O queue ID
+- first I/O queue on CPU0, second I/O queue on CPU1, etc.
 Admin queue command completion is not tied to any CPU, it's handled by any CPU.
-To keep lock contention to minimum, it's recommended to keep this assignment,
-even thought it is possible to reassign the interrupt handlers differently,
+To keep lock contention to minimum, it is recommended to keep this assignment,
+even though it is possible to reassign the interrupt handlers differently
 using
-.Xr intrctl 1 .
-Driver also uses soft interrupts to process command completions, in order to
-increase total system I/O capacity and throughput.
+.Xr intrctl 8 .
+The driver also uses soft interrupts to process command completions, in order to
+increase the total system I/O capacity and throughput.
 .Pp
-On systems without MSI, driver uses single HW interrupt handler, for
+On systems without MSI, the driver uses a single HW interrupt handler for
 both admin and standard I/O commands.
-Command submissions are done on current CPU, command completion interrupt
-is handled on any available CPU. This leads to some lock contention,
-especially on command ccbs.
+Command submissions are done on the current CPU, the command completion interrupt
+is handled on any available CPU.
+This leads to some lock contention, especially on command ccbs.
 Also, command completion handling must be done within the HW interrupt
 handler.
 .Sh FILES
 .Bl -tag -width /dev/nvmeX -compact
 .It Pa /dev/nvme*
 nvme device special files used by
-.Xr nvmectl 1 .
+.Xr nvmectl 8 .
 .El
 .Sh SEE ALSO
 .Xr intro 4 ,
 .Xr ld 4 ,
 .Xr pci 4 ,
-.Xr nvmectl 1 ,
+.Xr intrctl 8 ,
 .Xr MAKEDEV 8 ,
-.Xr intrctl 1
+.Xr nvmectl 8
 .Rs
 .%A NVM Express, Inc.
 .%T "NVM Express \- scalable, efficient, and industry standard"
@@ -130,23 +131,24 @@
 At least some
 .Nm
 adapter cards are known to require
 .Tn PCIe
-Generation 3 slot. Such cards do not even probe when plugged
+Generation 3 slot.
+Such cards do not even probe when plugged
 into older generation slot.
 .Pp
-Driver attaches and works fine also for emulated
+The driver also attaches and works fine for emulated
 .Nm
-device under QEMU and
+devices under QEMU and
 .Tn Oracle
 .Tn VirtualBox .
-However,
-there seems to be some broken interaction between
+However, there seems to be some broken interaction between
 .Tn VirtualBox 5.1.6
-and the driver, emulated
+and the driver, the emulated
 .Nm
-controller responds to commands only first time it's attached, after reboot or
-module reload stops responding. Virtual machine must be completely powered off
-(or even killed) to fix.
+controller responds to commands only the first time it is attached,
+after reboot or module reload it stops responding.
+The virtual machine must be completely powered off
+(or even killed) to fix this.
 .Pp
 .Nm
 kernel module is currently only loadable for kernels configured with
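The interrupt policy the revised text describes (as many vectors as available, capped at CPU count + 1; completion for the first I/O queue handled on CPU0, the second on CPU1, and so on) can be sketched as a toy model. This is an illustrative reconstruction of the documented behavior, not code from the driver; the function and parameter names are hypothetical.

```python
# Toy model of the vector/queue policy documented in nvme(4); not driver code.
def plan_queues(avail_vectors, ncpu):
    """Return a {io_queue_id: completion_cpu} map under the described policy."""
    # The driver allocates as many interrupt vectors as available,
    # up to ncpu + 1 (one admin vector plus per-CPU I/O queue vectors).
    nvec = min(avail_vectors, ncpu + 1)
    nioq = max(nvec - 1, 1)  # at least one I/O queue remains after the admin vector
    # First I/O queue completes on CPU0, second on CPU1, etc.
    return {qid: qid - 1 for qid in range(1, nioq + 1)}

print(plan_queues(avail_vectors=48, ncpu=4))  # one I/O queue per CPU
print(plan_queues(avail_vectors=2, ncpu=8))   # vector-starved: single I/O queue
```

On a running NetBSD system, the actual handler placement can be inspected (and, as the page cautions, reassigned) with intrctl(8).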