Diffstat (limited to 'static/freebsd/man4/nvme.4 3.html')
-rw-r--r--  static/freebsd/man4/nvme.4 3.html  220
1 file changed, 0 insertions, 220 deletions
diff --git a/static/freebsd/man4/nvme.4 3.html b/static/freebsd/man4/nvme.4 3.html
deleted file mode 100644
index e189e085..00000000
--- a/static/freebsd/man4/nvme.4 3.html
+++ /dev/null
@@ -1,220 +0,0 @@
-<table class="head">
- <tr>
- <td class="head-ltitle">NVME(4)</td>
- <td class="head-vol">Device Drivers Manual</td>
- <td class="head-rtitle">NVME(4)</td>
- </tr>
-</table>
-<div class="manual-text">
-<section class="Sh">
-<h1 class="Sh" id="NAME"><a class="permalink" href="#NAME">NAME</a></h1>
-<p class="Pp"><code class="Nm">nvme</code> &#x2014; <span class="Nd">NVM Express
- core driver</span></p>
-</section>
-<section class="Sh">
-<h1 class="Sh" id="SYNOPSIS"><a class="permalink" href="#SYNOPSIS">SYNOPSIS</a></h1>
-<p class="Pp">To compile this driver into your kernel, place the following line
- in your kernel configuration file:</p>
-<div class="Bd Pp Bd-indent"><code class="Cd">device nvme</code></div>
-<p class="Pp">Or, to load the driver as a module at boot, place the following
- line in <a class="Xr">loader.conf(5)</a>:</p>
-<div class="Bd Pp Bd-indent Li">
-<pre>nvme_load=&quot;YES&quot;</pre>
-</div>
-<p class="Pp">Most users will also want to enable <a class="Xr">nvd(4)</a> or
- <a class="Xr">nda(4)</a> to expose NVM Express namespaces as disk devices
- which can be partitioned. Note that in NVM Express terms, a namespace is
- roughly equivalent to a SCSI LUN.</p>
-</section>
-<section class="Sh">
-<h1 class="Sh" id="DESCRIPTION"><a class="permalink" href="#DESCRIPTION">DESCRIPTION</a></h1>
-<p class="Pp">The <code class="Nm">nvme</code> driver provides support for NVM
- Express (NVMe) controllers, such as:</p>
-<ul class="Bl-bullet">
- <li>Hardware initialization</li>
- <li>Per-CPU IO queue pairs</li>
- <li>API for registering NVMe namespace consumers such as
- <a class="Xr">nvd(4)</a> or <a class="Xr">nda(4)</a></li>
- <li>API for submitting NVM commands to namespaces</li>
- <li>Ioctls for controller and namespace configuration and management</li>
-</ul>
-<p class="Pp">The <code class="Nm">nvme</code> driver creates controller device
- nodes in the format <span class="Pa">/dev/nvmeX</span> and namespace device
- nodes in the format <span class="Pa">/dev/nvmeXnsY</span>. Note that the NVM
- Express specification starts numbering namespaces at 1, not 0, and this
- driver follows that convention.</p>
-</section>
-<section class="Sh">
-<h1 class="Sh" id="CONFIGURATION"><a class="permalink" href="#CONFIGURATION">CONFIGURATION</a></h1>
-<p class="Pp">By default, <code class="Nm">nvme</code> will create an I/O queue
- pair for each CPU, provided enough MSI-X vectors and NVMe queue pairs can be
- allocated. If not enough vectors or queue pairs are available,
- <code class="Nm">nvme</code> will use a smaller number of queue pairs and
- assign multiple CPUs per queue pair.</p>
-<p class="Pp">To force a single I/O queue pair shared by all CPUs, set the
- following tunable value in <a class="Xr">loader.conf(5)</a>:</p>
-<div class="Bd Pp Bd-indent Li">
-<pre>hw.nvme.per_cpu_io_queues=0</pre>
-</div>
-<p class="Pp">To assign more than one CPU per I/O queue pair, thereby reducing
- the number of MSI-X vectors consumed by the device, set the following
- tunable value in <a class="Xr">loader.conf(5)</a>:</p>
-<div class="Bd Pp Bd-indent Li">
-<pre>hw.nvme.min_cpus_per_ioq=X</pre>
-</div>
-<p class="Pp">To force legacy interrupts for all <code class="Nm">nvme</code>
- driver instances, set the following tunable value in
- <a class="Xr">loader.conf(5)</a>:</p>
-<div class="Bd Pp Bd-indent Li">
-<pre>hw.nvme.force_intx=1</pre>
-</div>
-<p class="Pp">Note that using INTx disables per-CPU I/O queue pairs.</p>
-<p class="Pp">To limit the maximum amount of system RAM, in bytes, used as a
- Host Memory Buffer by capable devices, set the following tunable:</p>
-<div class="Bd Pp Bd-indent Li">
-<pre>hw.nvme.hmb_max</pre>
-</div>
-<p class="Pp">The default value is 5% of physical memory size per device.</p>
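For example, to cap the Host Memory Buffer rather than accept the 5% default, a line such as the following could be placed in loader.conf(5). The tunable name comes from the text above; the 16 MiB byte value is purely illustrative, not a recommended setting:

```shell
# Illustrative only: cap the Host Memory Buffer at 16 MiB (16777216 bytes)
# per device instead of the default 5% of physical memory.
hw.nvme.hmb_max=16777216
```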
-<p class="Pp">To enable Autonomous Power State Transition (APST), set the
- following tunable value in <a class="Xr">loader.conf(5)</a>:</p>
-<div class="Bd Pp Bd-indent Li">
-<pre>hw.nvme.apst_enable=1</pre>
-</div>
-<p class="Pp">The default vendor-provided settings, if any, will be applied. To
- override this, set the following tunable:</p>
-<div class="Bd Pp Bd-indent Li">
-<pre>hw.nvme.apst_data</pre>
-</div>
-<p class="Pp">The string must contain up to 32 encoded integers, e.g.
- &quot;0x6418 0 0 0x3e820&quot;. Each value corresponds to a specific
- available power state starting from the lowest, and defines the target state
- (bits 3..7) to transition to, as well as the idle time in milliseconds (bits
- 8..31) to wait before that transition. Bits 0..2 must be zero.</p>
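As a sketch of how this encoding works, the example value 0x6418 above can be decoded with ordinary shell arithmetic, extracting the target power state from bits 3..7 and the idle time in milliseconds from bits 8..31:

```shell
# Decode one APST entry, using the example value 0x6418 from the text.
v=0x6418
# Target power state occupies bits 3..7 of the entry.
state=$(( (v >> 3) & 0x1f ))
# Idle time in milliseconds occupies bits 8..31.
idle_ms=$(( v >> 8 ))
echo "state=$state idle_ms=$idle_ms"   # state=3 idle_ms=100
```

Decoding the other example value, 0x3e820, the same way yields a transition to power state 4 after 1000 ms of idle time.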
-<p class="Pp">The <a class="Xr">nvd(4)</a> driver is used to provide a disk
- driver to the system by default. The <a class="Xr">nda(4)</a> driver can
- also be used instead. The <a class="Xr">nvd(4)</a> driver performs better
- with smaller transactions and few TRIM commands, since it sends all
- commands directly to the drive immediately. The <a class="Xr">nda(4)</a>
- driver performs better with larger transactions and with many TRIM
- commands: it can queue commands to the drive, combine
- <code class="Dv">BIO_DELETE</code> commands into a single trip, and use the
- CAM I/O scheduler to bias one type of operation over another. To select the
- <a class="Xr">nda(4)</a> driver, set the following tunable value in
- <a class="Xr">loader.conf(5)</a>:</p>
-<div class="Bd Pp Bd-indent Li">
-<pre>hw.nvme.use_nvd=0</pre>
-</div>
-<p class="Pp">This value may also be set in the kernel config file with</p>
-<div class="Bd Pp Bd-indent Li">
-<pre><code class="Cd">options NVME_USE_NVD=0</code></pre>
-</div>
-<p class="Pp">When there is an error, <code class="Nm">nvme</code> prints only
- the most relevant information about the command by default. To enable
- dumping of all information about the command, set the following tunable
- value in <a class="Xr">loader.conf(5)</a>:</p>
-<div class="Bd Pp Bd-indent Li">
-<pre>hw.nvme.verbose_cmd_dump=1</pre>
-</div>
-<p class="Pp">Prior versions of the driver reset the card twice on boot. This
- proved to be unnecessary and inefficient, so the driver now resets the
- drive controller only once. The old behavior may be restored in the kernel
- config file with</p>
-<div class="Bd Pp Bd-indent Li">
-<pre><code class="Cd">options NVME_2X_RESET</code></pre>
-</div>
-</section>
-<section class="Sh">
-<h1 class="Sh" id="SYSCTL_VARIABLES"><a class="permalink" href="#SYSCTL_VARIABLES">SYSCTL
- VARIABLES</a></h1>
-<p class="Pp">The following controller-level sysctls are currently
- implemented:</p>
-<dl class="Bl-tag">
- <dt id="dev.nvme.0.num_cpus_per_ioq"><var class="Va">dev.nvme.0.num_cpus_per_ioq</var></dt>
- <dd>(R) Number of CPUs associated with each I/O queue pair.</dd>
- <dt id="dev.nvme.0.int_coal_time"><var class="Va">dev.nvme.0.int_coal_time</var></dt>
- <dd>(R/W) Interrupt coalescing timer period in microseconds. Set to 0 to
- disable.</dd>
- <dt id="dev.nvme.0.int_coal_threshold"><var class="Va">dev.nvme.0.int_coal_threshold</var></dt>
- <dd>(R/W) Interrupt coalescing threshold in number of command completions. Set
- to 0 to disable.</dd>
-</dl>
-<p class="Pp">The following queue pair-level sysctls are currently implemented.
- Admin queue sysctls take the format of dev.nvme.0.adminq and I/O queue
- sysctls take the format of dev.nvme.0.ioq0.</p>
-<dl class="Bl-tag">
- <dt id="dev.nvme.0.ioq0.num_entries"><var class="Va">dev.nvme.0.ioq0.num_entries</var></dt>
- <dd>(R) Number of entries in this queue pair's command and completion
- queue.</dd>
- <dt id="dev.nvme.0.ioq0.num_tr"><var class="Va">dev.nvme.0.ioq0.num_tr</var></dt>
- <dd>(R) Number of nvme_tracker structures currently allocated for this queue
- pair.</dd>
- <dt id="dev.nvme.0.ioq0.num_prp_list"><var class="Va">dev.nvme.0.ioq0.num_prp_list</var></dt>
- <dd>(R) Number of nvme_prp_list structures currently allocated for this queue
- pair.</dd>
- <dt id="dev.nvme.0.ioq0.sq_head"><var class="Va">dev.nvme.0.ioq0.sq_head</var></dt>
- <dd>(R) Current location of the submission queue head pointer as observed by
- the driver. The head pointer is incremented by the controller as it takes
- commands off of the submission queue.</dd>
- <dt id="dev.nvme.0.ioq0.sq_tail"><var class="Va">dev.nvme.0.ioq0.sq_tail</var></dt>
- <dd>(R) Current location of the submission queue tail pointer as observed by
- the driver. The driver increments the tail pointer after writing a command
- into the submission queue to signal that a new command is ready to be
- processed.</dd>
- <dt id="dev.nvme.0.ioq0.cq_head"><var class="Va">dev.nvme.0.ioq0.cq_head</var></dt>
- <dd>(R) Current location of the completion queue head pointer as observed by
- the driver. The driver increments the head pointer after finishing with a
- completion entry that was posted by the controller.</dd>
- <dt id="dev.nvme.0.ioq0.num_cmds"><var class="Va">dev.nvme.0.ioq0.num_cmds</var></dt>
- <dd>(R) Number of commands that have been submitted on this queue pair.</dd>
- <dt id="dev.nvme.0.ioq0.dump_debug"><var class="Va">dev.nvme.0.ioq0.dump_debug</var></dt>
- <dd>(W) Writing 1 to this sysctl will dump the full contents of the submission
- and completion queues to the console.</dd>
-</dl>
-<p class="Pp">In addition to the typical PCI attachment, the
- <code class="Nm">nvme</code> driver supports attaching to an
- <a class="Xr">ahci(4)</a> device. Intel's Rapid Storage Technology (RST)
- hides the NVMe device behind the AHCI device due to limitations in Windows,
- which effectively hides it from the <span class="Ux">FreeBSD</span> kernel
- as well. To work around this limitation, <span class="Ux">FreeBSD</span>
- detects when the AHCI device supports RST and has it enabled. See
- <a class="Xr">ahci(4)</a> for more details.</p>
-</section>
-<section class="Sh">
-<h1 class="Sh" id="DIAGNOSTICS"><a class="permalink" href="#DIAGNOSTICS">DIAGNOSTICS</a></h1>
-<dl class="Bl-diag">
- <dt>nvme%d: System interrupt issues?</dt>
- <dd>The driver found that a timed-out transaction had a pending completion
- record, indicating an interrupt had not been delivered. The system is
- either not configuring interrupts properly, or it drops them under load.
- This message will appear at most once per boot per controller.</dd>
-</dl>
-</section>
-<section class="Sh">
-<h1 class="Sh" id="SEE_ALSO"><a class="permalink" href="#SEE_ALSO">SEE
- ALSO</a></h1>
-<p class="Pp"><a class="Xr">nda(4)</a>, <a class="Xr">nvd(4)</a>,
- <a class="Xr">pci(4)</a>, <a class="Xr">nvmecontrol(8)</a>,
- <a class="Xr">disk(9)</a></p>
-</section>
-<section class="Sh">
-<h1 class="Sh" id="HISTORY"><a class="permalink" href="#HISTORY">HISTORY</a></h1>
-<p class="Pp">The <code class="Nm">nvme</code> driver first appeared in
- <span class="Ux">FreeBSD 9.2</span>.</p>
-</section>
-<section class="Sh">
-<h1 class="Sh" id="AUTHORS"><a class="permalink" href="#AUTHORS">AUTHORS</a></h1>
-<p class="Pp">The <code class="Nm">nvme</code> driver was developed by Intel and
- originally written by <span class="An">Jim Harris</span>
- &lt;<a class="Mt" href="mailto:jimharris@FreeBSD.org">jimharris@FreeBSD.org</a>&gt;,
- with contributions from <span class="An">Joe Golio</span> at EMC.</p>
-<p class="Pp">This man page was written by <span class="An">Jim Harris</span>
- &lt;<a class="Mt" href="mailto:jimharris@FreeBSD.org">jimharris@FreeBSD.org</a>&gt;.</p>
-</section>
-</div>
-<table class="foot">
- <tr>
- <td class="foot-date">June 6, 2020</td>
- <td class="foot-os">FreeBSD 15.0</td>
- </tr>
-</table>