<table class="head">
  <tr>
    <td class="head-ltitle">AIO(4)</td>
    <td class="head-vol">Device Drivers Manual</td>
    <td class="head-rtitle">AIO(4)</td>
  </tr>
</table>
<div class="manual-text">
<section class="Sh">
<h1 class="Sh" id="NAME"><a class="permalink" href="#NAME">NAME</a></h1>
<p class="Pp"><code class="Nm">aio</code> &#x2014; <span class="Nd">asynchronous
  I/O</span></p>
</section>
<section class="Sh">
<h1 class="Sh" id="DESCRIPTION"><a class="permalink" href="#DESCRIPTION">DESCRIPTION</a></h1>
<p class="Pp">The <code class="Nm">aio</code> facility provides system calls for
  asynchronous I/O. Asynchronous I/O operations are not completed
  synchronously by the calling thread. Instead, the calling thread invokes one
  system call to request an asynchronous I/O operation. The status of a
  completed request is retrieved later via a separate system call.</p>
<p class="Pp">Asynchronous I/O operations on some file descriptor types may
  block an AIO daemon indefinitely, resulting in process and/or system hangs.
  Operations on these file descriptor types are considered
  &#x201C;unsafe&#x201D; and are disabled by default. They can be enabled by
  setting the <var class="Va">vfs.aio.enable_unsafe</var> sysctl node to a
  non-zero value.</p>
<p class="Pp">Asynchronous I/O operations on sockets, raw disk devices, and
  regular files on local filesystems do not block indefinitely and are always
  enabled.</p>
<p class="Pp">The <code class="Nm">aio</code> facility uses kernel processes
  (also known as AIO daemons) to service most asynchronous I/O requests. These
  processes are grouped into pools containing a variable number of processes.
  Processes are added to or removed from each pool based on load. Pools can
  be configured by sysctl nodes that define the minimum and maximum number of
  processes as well as the amount of time an idle process will wait before
  exiting.</p>
<p class="Pp">One pool of AIO daemons is used to service asynchronous I/O
  requests for sockets. These processes are named
  &#x201C;soaiod&lt;N&gt;&#x201D;. The following sysctl nodes are used with
  this pool:</p>
<dl class="Bl-tag">
  <dt id="kern.ipc.aio.num_procs"><var class="Va">kern.ipc.aio.num_procs</var></dt>
  <dd>The current number of processes in the pool.</dd>
  <dt id="kern.ipc.aio.target_procs"><var class="Va">kern.ipc.aio.target_procs</var></dt>
  <dd>The minimum number of processes that should be present in the pool.</dd>
  <dt id="kern.ipc.aio.max_procs"><var class="Va">kern.ipc.aio.max_procs</var></dt>
  <dd>The maximum number of processes permitted in the pool.</dd>
  <dt id="kern.ipc.aio.lifetime"><var class="Va">kern.ipc.aio.lifetime</var></dt>
  <dd>The amount of time, in clock ticks, a process is permitted to idle. If a
    process is idle for this amount of time and there are more processes in
    the pool than the target minimum, the process will exit.</dd>
</dl>
<p class="Pp">A second pool of AIO daemons is used to service all other
  asynchronous I/O requests except for I/O requests to raw disks. These
  processes are named &#x201C;aiod&lt;N&gt;&#x201D;. The following sysctl
  nodes are used with this pool:</p>
<dl class="Bl-tag">
  <dt id="vfs.aio.num_aio_procs"><var class="Va">vfs.aio.num_aio_procs</var></dt>
  <dd>The current number of processes in the pool.</dd>
  <dt id="vfs.aio.target_aio_procs"><var class="Va">vfs.aio.target_aio_procs</var></dt>
  <dd>The minimum number of processes that should be present in the pool.</dd>
  <dt id="vfs.aio.max_aio_procs"><var class="Va">vfs.aio.max_aio_procs</var></dt>
  <dd>The maximum number of processes permitted in the pool.</dd>
  <dt id="vfs.aio.aiod_lifetime"><var class="Va">vfs.aio.aiod_lifetime</var></dt>
  <dd>The amount of time, in clock ticks, a process is permitted to idle. If a
    process is idle for this amount of time and there are more processes in
    the pool than the target minimum, the process will exit.</dd>
</dl>
<p class="Pp">Asynchronous I/O requests for raw disks are queued directly to the
  disk device layer after temporarily wiring the user pages associated with
  the request. These requests are not serviced by any of the AIO daemon
  pools.</p>
<p class="Pp">Several limits on the number of asynchronous I/O requests are
  imposed both system-wide and per-process. These limits are configured via
  the following sysctls:</p>
<dl class="Bl-tag">
  <dt id="vfs.aio.max_buf_aio"><var class="Va">vfs.aio.max_buf_aio</var></dt>
  <dd>The maximum number of queued asynchronous I/O requests for raw disks
    permitted for a single process. Asynchronous I/O requests that have
    completed but whose status has not been retrieved via
    <a class="Xr">aio_return(2)</a> or <a class="Xr">aio_waitcomplete(2)</a>
    are not counted against this limit.</dd>
  <dt id="vfs.aio.num_buf_aio"><var class="Va">vfs.aio.num_buf_aio</var></dt>
  <dd>The number of queued asynchronous I/O requests for raw disks
    system-wide.</dd>
  <dt id="vfs.aio.max_aio_queue_per_proc"><var class="Va">vfs.aio.max_aio_queue_per_proc</var></dt>
  <dd>The maximum number of asynchronous I/O requests for a single process
    serviced concurrently by the default AIO daemon pool.</dd>
  <dt id="vfs.aio.max_aio_per_proc"><var class="Va">vfs.aio.max_aio_per_proc</var></dt>
  <dd>The maximum number of outstanding asynchronous I/O requests permitted for
    a single process. This includes requests that have not been serviced,
    requests currently being serviced, and requests that have completed but
    whose status has not been retrieved via <a class="Xr">aio_return(2)</a> or
    <a class="Xr">aio_waitcomplete(2)</a>.</dd>
  <dt id="vfs.aio.num_queue_count"><var class="Va">vfs.aio.num_queue_count</var></dt>
  <dd>The number of outstanding asynchronous I/O requests system-wide.</dd>
  <dt id="vfs.aio.max_aio_queue"><var class="Va">vfs.aio.max_aio_queue</var></dt>
  <dd>The maximum number of outstanding asynchronous I/O requests permitted
    system-wide.</dd>
</dl>
<p class="Pp">Asynchronous I/O control buffers should be zeroed before
  initializing individual fields. This ensures that all fields, including
  those not explicitly set, start from a known state.</p>
<p class="Pp">All asynchronous I/O control buffers contain a
  <var class="Vt">sigevent</var> structure in the
  <var class="Va">aio_sigevent</var> field which can be used to request
  notification when an operation completes.</p>
<p class="Pp">For <code class="Dv">SIGEV_KEVENT</code> notifications, the
  <var class="Va">sigevent</var>'s <var class="Va">sigev_notify_kqueue</var>
  field should contain the descriptor of the kqueue that the event should be
  attached to, its <var class="Va">sigev_notify_kevent_flags</var> field may
  contain <code class="Dv">EV_ONESHOT</code>,
  <code class="Dv">EV_CLEAR</code>, and/or
  <code class="Dv">EV_DISPATCH</code>, and its
  <var class="Va">sigev_notify</var> field should be set to
  <code class="Dv">SIGEV_KEVENT</code>. The posted kevent will contain:</p>
<table class="Bl-column">
  <tr id="Member">
    <td><a class="permalink" href="#Member"><b class="Sy">Member</b></a></td>
    <td><a class="permalink" href="#Value"><b class="Sy" id="Value">Value</b></a></td>
  </tr>
  <tr id="ident">
    <td><var class="Va">ident</var></td>
    <td>asynchronous I/O control buffer pointer</td>
  </tr>
  <tr id="filter">
    <td><var class="Va">filter</var></td>
    <td><a class="permalink" href="#EVFILT_AIO"><code class="Dv" id="EVFILT_AIO">EVFILT_AIO</code></a></td>
  </tr>
  <tr id="flags">
    <td><var class="Va">flags</var></td>
    <td><a class="permalink" href="#EV_EOF"><code class="Dv" id="EV_EOF">EV_EOF</code></a></td>
  </tr>
  <tr id="udata">
    <td><var class="Va">udata</var></td>
    <td>value stored in <var class="Va">aio_sigevent.sigev_value</var></td>
  </tr>
</table>
<p class="Pp">For <code class="Dv">SIGEV_SIGNO</code> and
  <code class="Dv">SIGEV_THREAD_ID</code> notifications, the information for
  the queued signal will include <code class="Dv">SI_ASYNCIO</code> in the
  <var class="Va">si_code</var> field and the value stored in
  <var class="Va">aio_sigevent.sigev_value</var> in the
  <var class="Va">si_value</var> field.</p>
<p class="Pp">For <code class="Dv">SIGEV_THREAD</code> notifications, the value
  stored in <var class="Va">aio_sigevent.sigev_value</var> is passed to the
  <var class="Va">aio_sigevent.sigev_notify_function</var> as described in
  <a class="Xr">sigevent(3)</a>.</p>
</section>
<section class="Sh">
<h1 class="Sh" id="SEE_ALSO"><a class="permalink" href="#SEE_ALSO">SEE
  ALSO</a></h1>
<p class="Pp"><a class="Xr">aio_cancel(2)</a>, <a class="Xr">aio_error(2)</a>,
  <a class="Xr">aio_read(2)</a>, <a class="Xr">aio_readv(2)</a>,
  <a class="Xr">aio_return(2)</a>, <a class="Xr">aio_suspend(2)</a>,
  <a class="Xr">aio_waitcomplete(2)</a>, <a class="Xr">aio_write(2)</a>,
  <a class="Xr">aio_writev(2)</a>, <a class="Xr">lio_listio(2)</a>,
  <a class="Xr">sigevent(3)</a>, <a class="Xr">sysctl(8)</a></p>
</section>
<section class="Sh">
<h1 class="Sh" id="HISTORY"><a class="permalink" href="#HISTORY">HISTORY</a></h1>
<p class="Pp">The <code class="Nm">aio</code> facility appeared as a kernel
  option in <span class="Ux">FreeBSD 3.0</span>. The
  <code class="Nm">aio</code> kernel module appeared in
  <span class="Ux">FreeBSD 5.0</span>. The <code class="Nm">aio</code>
  facility was integrated into all kernels in <span class="Ux">FreeBSD
  11.0</span>.</p>
</section>
</div>
<table class="foot">
  <tr>
    <td class="foot-date">January 2, 2021</td>
    <td class="foot-os">FreeBSD 15.0</td>
  </tr>
</table>