<table class="head">
  <tr>
    <td class="head-ltitle">NVME(4)</td>
    <td class="head-vol">Device Drivers Manual</td>
    <td class="head-rtitle">NVME(4)</td>
  </tr>
</table>
<div class="manual-text">
<section class="Sh">
<h1 class="Sh" id="NAME"><a class="permalink" href="#NAME">NAME</a></h1>
<p class="Pp"><code class="Nm">nvme</code> —
  <span class="Nd">Non-Volatile Memory Host Controller Interface</span></p>
</section>
<section class="Sh">
<h1 class="Sh" id="SYNOPSIS"><a class="permalink" href="#SYNOPSIS">SYNOPSIS</a></h1>
<p class="Pp"><code class="Cd">nvme* at pci? dev ? function ?</code></p>
</section>
<section class="Sh">
<h1 class="Sh" id="DESCRIPTION"><a class="permalink" href="#DESCRIPTION">DESCRIPTION</a></h1>
<p class="Pp">The <code class="Nm">nvme</code> driver provides support for NVMe,
  or NVM Express, storage controllers conforming to the Non-Volatile Memory
  Host Controller Interface specification. Controllers complying with
  specification versions 1.1 and 1.2 are known to work. Other versions should
  also work for normal operation, with the exception of some pass-through
  commands.</p>
<p class="Pp">The driver supports the following features:</p>
<ul class="Bl-bullet Bd-indent Bl-compact">
  <li>controller and namespace configuration and management using
    <a class="Xr">nvmectl(8)</a></li>
  <li>highly parallel I/O using per-CPU I/O queues</li>
  <li>PCI MSI/MSI-X attachment, and INTx for legacy systems</li>
</ul>
<p class="Pp">On systems supporting MSI/MSI-X, the <code class="Nm">nvme</code>
  driver uses per-CPU I/O queue pairs for lockless and highly parallelized I/O.
  Interrupt handlers are scheduled on distinct CPUs. The driver allocates as
  many interrupt vectors as available, up to the number of CPUs + 1.
MSI supports
  up to 32 interrupt vectors system-wide; MSI-X supports up to 2048. Each
  I/O queue pair has a separate circular command buffer. The NVMe
  specification allows up to 64K commands per queue; the driver currently
  allocates 1024 entries per queue, or the controller maximum, whichever is
  smaller. Commands are always submitted on the current CPU; the command
  completion interrupt is handled on the CPU corresponding to the I/O queue
  ID: the first I/O queue on CPU0, the second I/O queue on CPU1, and so on.
  Admin queue command completion is handled by CPU0 by default. To keep lock
  contention to a minimum, it is recommended to keep this assignment, even
  though it is possible to reassign the interrupt handlers using
  <a class="Xr">intrctl(8)</a>.</p>
<p class="Pp">On systems without MSI, the driver uses a single hardware
  interrupt handler for both admin and standard I/O commands. Commands are
  submitted on the current CPU; the command completion interrupt is handled
  on CPU0 by default.
This leads to some lock contention,
  especially on the command ccbs.</p>
<p class="Pp">The driver offloads command completion processing to a soft
  interrupt in order to increase total system I/O capacity and
  throughput.</p>
</section>
<section class="Sh">
<h1 class="Sh" id="FILES"><a class="permalink" href="#FILES">FILES</a></h1>
<dl class="Bl-tag Bl-compact">
  <dt><span class="Pa">/dev/nvme*</span></dt>
  <dd><code class="Nm">nvme</code> device special files used by
    <a class="Xr">nvmectl(8)</a>.</dd>
</dl>
</section>
<section class="Sh">
<h1 class="Sh" id="SEE_ALSO"><a class="permalink" href="#SEE_ALSO">SEE
  ALSO</a></h1>
<p class="Pp"><a class="Xr">intro(4)</a>, <a class="Xr">ld(4)</a>,
  <a class="Xr">pci(4)</a>, <a class="Xr">intrctl(8)</a>,
  <a class="Xr">MAKEDEV(8)</a>, <a class="Xr">nvmectl(8)</a></p>
<p class="Pp"><cite class="Rs"><span class="RsA">NVM Express, Inc.</span>,
  <span class="RsT">NVM Express - scalable, efficient, and industry
  standard</span>,
  <a class="RsU" href="https://nvmexpress.org/">https://nvmexpress.org/</a>,
  <span class="RsD">2016-06-12</span>.</cite></p>
<p class="Pp"><cite class="Rs"><span class="RsA">NVM Express, Inc.</span>,
  <span class="RsT">NVM Express Revision 1.2.1</span>,
  <a class="RsU" href="http://www.nvmexpress.org/wp-content/uploads/NVM_Express_1_2_1_Gold_20160603.pdf">http://www.nvmexpress.org/wp-content/uploads/NVM_Express_1_2_1_Gold_20160603.pdf</a>,
  <span class="RsD">2016-06-05</span>.</cite></p>
</section>
<section class="Sh">
<h1 class="Sh" id="HISTORY"><a class="permalink" href="#HISTORY">HISTORY</a></h1>
<p class="Pp">The <code class="Nm">nvme</code> driver first appeared in
  <span class="Ux">OpenBSD 6.0</span> and in <span class="Ux">NetBSD
  8.0</span>.</p>
</section>
<section class="Sh">
<h1 class="Sh" id="AUTHORS"><a class="permalink" href="#AUTHORS">AUTHORS</a></h1>
<p class="Pp">The <code class="Nm">nvme</code> driver was written by
  <span class="An">David Gwynne</span>
  <<a class="Mt" href="mailto:dlg@openbsd.org">dlg@openbsd.org</a>> for
  <span class="Ux">OpenBSD</span> and ported to <span class="Ux">NetBSD</span>
  by <span class="An">NONAKA Kimihiro</span>
  <<a class="Mt" href="mailto:nonaka@NetBSD.org">nonaka@NetBSD.org</a>>.
  <span class="An">Jaromir Dolecek</span>
  <<a class="Mt" href="mailto:jdolecek@NetBSD.org">jdolecek@NetBSD.org</a>>
  contributed to making this driver MPSAFE.</p>
</section>
<section class="Sh">
<h1 class="Sh" id="NOTES"><a class="permalink" href="#NOTES">NOTES</a></h1>
<p class="Pp">At least some Intel <code class="Nm">nvme</code> adapter cards
  are known to require a PCIe Generation 3 slot. Such cards do not even probe
  when plugged into an older generation slot.</p>
<p class="Pp">The driver has also been tested and confirmed to work with
  emulated <code class="Nm">nvme</code> devices under QEMU 2.8.0, Oracle
  VirtualBox 5.1.20, and Parallels Desktop 16.</p>
<p class="Pp">For Parallels Desktop, it is important that the virtual machine
  has its NVMe disks configured starting from 'NVMe 1', so that the NVMe
  namespaces are correctly initialized and <a class="Xr">ld(4)</a> devices
  are attached.</p>
</section>
</div>
<table class="foot">
  <tr>
    <td class="foot-date">October 5, 2024</td>
    <td class="foot-os">NetBSD 10.1</td>
  </tr>
</table>
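
As a sketch of how the SYNOPSIS line is used in practice, the kernel
configuration fragment below attaches the controller at <a class="Xr">pci(4)</a>
and each namespace as an <a class="Xr">ld(4)</a> disk; the
<code class="Cd">ld* at nvme? nsid ?</code> attachment follows the stock
NetBSD GENERIC configuration, though the exact wording in a given kernel
config may differ.

```
# NVMe controller attachment, as shown in the SYNOPSIS
nvme*   at pci? dev ? function ?
# Each NVMe namespace attaches as an ld(4) disk drive
ld*     at nvme? nsid ?
```

Once the controller attaches, the <span class="Pa">/dev/nvme*</span> special
files (created with <a class="Xr">MAKEDEV(8)</a> if absent) can be inspected
with <a class="Xr">nvmectl(8)</a>, for example
<code class="Cd">nvmectl identify nvme0</code>, assuming the first controller
attached as <code class="Nm">nvme0</code>.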
