| author | Jacob McDonnell <jacob@jacobmcdonnell.com> | 2026-04-25 19:55:15 -0400 |
|---|---|---|
| committer | Jacob McDonnell <jacob@jacobmcdonnell.com> | 2026-04-25 19:55:15 -0400 |
| commit | 253e67c8b3a72b3a4757fdbc5845297628db0a4a (patch) | |
| tree | adf53b66087aa30dfbf8bf391a1dadb044c3bf4d /static/netbsd/man9/locking.9 3.html | |
| parent | a9157ce950dfe2fc30795d43b9d79b9d1bffc48b (diff) | |
docs: Added All NetBSD Manuals
Diffstat (limited to 'static/netbsd/man9/locking.9 3.html')
| -rw-r--r-- | static/netbsd/man9/locking.9 3.html | 333 |
1 files changed, 333 insertions, 0 deletions
diff --git a/static/netbsd/man9/locking.9 3.html b/static/netbsd/man9/locking.9 3.html
new file mode 100644
index 00000000..9704a7c1
--- /dev/null
+++ b/static/netbsd/man9/locking.9 3.html
@@ -0,0 +1,333 @@
+<table class="head">
+  <tr>
+    <td class="head-ltitle">LOCKING(9)</td>
+    <td class="head-vol">Kernel Developer's Manual</td>
+    <td class="head-rtitle">LOCKING(9)</td>
+  </tr>
+</table>
+<div class="manual-text">
+<section class="Sh">
+<h1 class="Sh" id="NAME"><a class="permalink" href="#NAME">NAME</a></h1>
+<p class="Pp"><code class="Nm">locking</code> —
+  <span class="Nd">introduction to kernel synchronization and interrupt
+  control</span></p>
+</section>
+<section class="Sh">
+<h1 class="Sh" id="DESCRIPTION"><a class="permalink" href="#DESCRIPTION">DESCRIPTION</a></h1>
+<p class="Pp">The <span class="Ux">NetBSD</span> kernel provides several
+  synchronization and interrupt control primitives. This man page aims to give
+  an overview of these interfaces and their proper application.
Also included
+  are basic kernel thread control primitives and a rough overview of the
+  <span class="Ux">NetBSD</span> kernel design.</p>
+</section>
+<section class="Sh">
+<h1 class="Sh" id="KERNEL_OVERVIEW"><a class="permalink" href="#KERNEL_OVERVIEW">KERNEL
+  OVERVIEW</a></h1>
+<p class="Pp">The aims of synchronization, threads, and interrupt control in
+  the kernel are:</p>
+<ul class="Bl-bullet Bd-indent">
+  <li>To control concurrent access to shared resources (critical sections).</li>
+  <li>To spawn tasks from an interrupt in the thread context.</li>
+  <li>To mask interrupts from threads.</li>
+  <li>To scale on multiprocessor systems.</li>
+</ul>
+<p class="Pp">There are three types of context in the
+  <span class="Ux">NetBSD</span> kernel:</p>
+<ul class="Bl-bullet Bd-indent">
+  <li id="Thread"><a class="permalink" href="#Thread"><i class="Em">Thread
+    context</i></a> - running processes (represented by
+    <code class="Dv">struct proc</code>) and light-weight processes
+    (represented by <code class="Dv">struct lwp</code>, also known as kernel
+    threads). Code in this context can sleep, block on resources, and owns an
+    address-space context.</li>
+  <li id="Software"><a class="permalink" href="#Software"><i class="Em">Software
+    interrupt context</i></a> - more restricted than thread context. Code in
+    this context must complete quickly and does not own an address-space
+    context. Software interrupts are a way of deferring work from a hardware
+    interrupt so that more expensive processing can be done at a lower
+    interrupt priority.</li>
+  <li id="Hard"><a class="permalink" href="#Hard"><i class="Em">Hard interrupt
+    context</i></a> - Code in this context must be processed as quickly as
+    possible.
Code here must not sleep or wait on
+    resources that may take a long time to become available.</li>
+</ul>
+<p class="Pp">The main differences between processes and kernel threads are:</p>
+<ul class="Bl-bullet Bd-indent">
+  <li>A single process can own multiple kernel threads (LWPs).</li>
+  <li>A process owns an address-space context that maps the userland address
+    space.</li>
+  <li>Processes are designed for userland executables and kernel threads for
+    in-kernel tasks. The only process running in kernel space is
+    <code class="Dv">proc0</code> (called swapper).</li>
+</ul>
+</section>
+<section class="Sh">
+<h1 class="Sh" id="INTERFACES"><a class="permalink" href="#INTERFACES">INTERFACES</a></h1>
+<section class="Ss">
+<h2 class="Ss" id="Atomic_memory_operations"><a class="permalink" href="#Atomic_memory_operations">Atomic
+  memory operations</a></h2>
+<p class="Pp">The <code class="Nm">atomic_ops</code> family of functions
+  provides atomic memory operations. There are seven classes of atomic memory
+  operations available: addition, logical “and”, compare-and-swap,
+  decrement, increment, logical “or”, and swap.</p>
+<p class="Pp">See <a class="Xr">atomic_ops(3)</a>.</p>
+</section>
+<section class="Ss">
+<h2 class="Ss" id="Condition_variables"><a class="permalink" href="#Condition_variables">Condition
+  variables</a></h2>
+<p class="Pp">Condition variables (CVs) are used in the kernel to synchronize
+  access to resources that are limited (for example, memory) and to wait for
+  pending I/O operations to complete.</p>
+<p class="Pp">See <a class="Xr">condvar(9)</a>.</p>
+</section>
+<section class="Ss">
+<h2 class="Ss" id="Memory_access_barrier_operations"><a class="permalink" href="#Memory_access_barrier_operations">Memory
+  access barrier operations</a></h2>
+<p class="Pp">The <code class="Nm">membar_ops</code> family of functions
+  provides memory access barrier operations necessary for synchronization in
+  multiprocessor execution environments that have relaxed load and store
order.</p>
+<p class="Pp">See <a class="Xr">membar_ops(3)</a>.</p>
+</section>
+<section class="Ss">
+<h2 class="Ss" id="Memory_barriers"><a class="permalink" href="#Memory_barriers">Memory
+  barriers</a></h2>
+<p class="Pp">Memory barriers can be used to control the order in which
+  memory accesses occur, and thus the order in which those accesses become
+  visible to other processors. They can be used to implement
+  “lockless” access to data structures where the necessary
+  barrier conditions are well understood.</p>
+</section>
+<section class="Ss">
+<h2 class="Ss" id="Mutual_exclusion_primitives"><a class="permalink" href="#Mutual_exclusion_primitives">Mutual
+  exclusion primitives</a></h2>
+<p class="Pp">Thread-based adaptive mutexes are lightweight, exclusive
+  locks that use threads as the focus of synchronization activity. Adaptive
+  mutexes typically behave like spinlocks, but under specific conditions an
+  attempt to acquire an already held adaptive mutex may cause the acquiring
+  thread to sleep. Sleep activity occurs rarely; busy-waiting is typically
+  more efficient because mutex hold times are most often short. In contrast to
+  pure spinlocks, a thread holding an adaptive mutex may be pre-empted in the
+  kernel, which can allow for reduced latency where soft real-time
+  applications are in use on the system.</p>
+<p class="Pp">See <a class="Xr">mutex(9)</a>.</p>
+</section>
+<section class="Ss">
+<h2 class="Ss" id="Restartable_atomic_sequences"><a class="permalink" href="#Restartable_atomic_sequences">Restartable
+  atomic sequences</a></h2>
+<p class="Pp">Restartable atomic sequences are sequences of user code that
+  are guaranteed to execute without preemption. This property is assured by
+  checking the set of restartable atomic sequences registered for a process
+  during <a class="Xr">cpu_switchto(9)</a>.
If a process is found to have been
+  preempted during a restartable sequence, its execution is rolled back
+  to the start of the sequence by resetting its program counter, which is
+  saved in its process control block (PCB).</p>
+<p class="Pp">See <a class="Xr">ras(9)</a>.</p>
+</section>
+<section class="Ss">
+<h2 class="Ss" id="Reader_/_writer_lock_primitives"><a class="permalink" href="#Reader_/_writer_lock_primitives">Reader
+  / writer lock primitives</a></h2>
+<p class="Pp">Reader / writer locks (RW locks) are used in the kernel to
+  synchronize access to an object among LWPs (lightweight processes) and soft
+  interrupt handlers. In addition to the capabilities provided by mutexes, RW
+  locks distinguish between read (shared) and write (exclusive) access.</p>
+<p class="Pp">See <a class="Xr">rwlock(9)</a>.</p>
+</section>
+<section class="Ss">
+<h2 class="Ss" id="Functions_to_modify_system_interrupt_priority_level"><a class="permalink" href="#Functions_to_modify_system_interrupt_priority_level">Functions
+  to modify system interrupt priority level</a></h2>
+<p class="Pp">These functions raise and lower the interrupt priority level.
+  They are used by kernel code to block interrupts in critical sections, in
+  order to protect data structures.</p>
+<p class="Pp">See <a class="Xr">spl(9)</a>.</p>
+</section>
+<section class="Ss">
+<h2 class="Ss" id="Machine-independent_software_interrupt_framework"><a class="permalink" href="#Machine-independent_software_interrupt_framework">Machine-independent
+  software interrupt framework</a></h2>
+<p class="Pp">The software interrupt framework provides a generic
+  software interrupt mechanism that can be used any time a low-priority
+  callback is required.
It allows dynamic registration
+  of software interrupts by loadable drivers and protocol stacks,
+  prioritization and fair queueing of software interrupts, and
+  machine-dependent optimizations to reduce cost.</p>
+<p class="Pp">See <a class="Xr">softint(9)</a>.</p>
+</section>
+<section class="Ss">
+<h2 class="Ss" id="Functions_to_raise_the_system_priority_level"><a class="permalink" href="#Functions_to_raise_the_system_priority_level">Functions
+  to raise the system priority level</a></h2>
+<p class="Pp">The <code class="Nm">splraiseipl</code> function raises the system
+  priority level to the level specified by <code class="Dv">icookie</code>,
+  which should be a value returned by <a class="Xr">makeiplcookie(9)</a>. In
+  general, device drivers should not make use of this interface. To ensure
+  correct synchronization, device drivers should use the
+  <a class="Xr">condvar(9)</a>, <a class="Xr">mutex(9)</a>, and
+  <a class="Xr">rwlock(9)</a> interfaces.</p>
+<p class="Pp">See <a class="Xr">splraiseipl(9)</a>.</p>
+</section>
+<section class="Ss">
+<h2 class="Ss" id="Passive_serialization_mechanism"><a class="permalink" href="#Passive_serialization_mechanism">Passive
+  serialization mechanism</a></h2>
+<p class="Pp">Passive serialization is a reader / writer synchronization
+  mechanism designed for lock-less read operations. Read operations may
+  happen from software interrupt context at
+  <code class="Dv">IPL_SOFTCLOCK</code>.</p>
+<p class="Pp">See <a class="Xr">pserialize(9)</a>.</p>
+</section>
+<section class="Ss">
+<h2 class="Ss" id="Passive_reference_mechanism"><a class="permalink" href="#Passive_reference_mechanism">Passive
+  reference mechanism</a></h2>
+<p class="Pp">The passive reference mechanism allows CPUs to cheaply acquire
+  and release references to a resource, with the guarantee that the resource
+  will not be destroyed until the reference is released.
Acquiring and releasing passive
+  references requires no interprocessor synchronization, except when the
+  resource is pending destruction.</p>
+<p class="Pp">See <a class="Xr">psref(9)</a>.</p>
+</section>
+<section class="Ss">
+<h2 class="Ss" id="Localcount_mechanism"><a class="permalink" href="#Localcount_mechanism">Localcount
+  mechanism</a></h2>
+<p class="Pp">Localcounts are used in the kernel to implement a medium-weight
+  reference counting mechanism. During normal operation, localcounts do not
+  need the interprocessor synchronization associated with
+  <a class="Xr">atomic_ops(3)</a> atomic memory operations, and (unlike
+  <a class="Xr">psref(9)</a>) localcount references can be held across sleeps
+  and can migrate between CPUs. Draining a localcount requires more expensive
+  interprocessor synchronization than <a class="Xr">atomic_ops(3)</a> (similar
+  to <a class="Xr">psref(9)</a>), and localcount references require eight
+  bytes of memory per object per CPU, significantly more than
+  <a class="Xr">atomic_ops(3)</a> and almost always more than
+  <a class="Xr">psref(9)</a>.</p>
+<p class="Pp">See <a class="Xr">localcount(9)</a>.</p>
+</section>
+<section class="Ss">
+<h2 class="Ss" id="Simple_do-it-in-thread-context_framework"><a class="permalink" href="#Simple_do-it-in-thread-context_framework">Simple
+  do-it-in-thread-context framework</a></h2>
+<p class="Pp">The workqueue utility routines defer work that needs to be
+  processed in a thread context.</p>
+<p class="Pp">See <a class="Xr">workqueue(9)</a>.</p>
+</section>
+</section>
+<section class="Sh">
+<h1 class="Sh" id="USAGE"><a class="permalink" href="#USAGE">USAGE</a></h1>
+<p class="Pp">The following table describes the contexts in which each
+  <span class="Ux">NetBSD</span> kernel interface may validly be used.
Synchronization
+  primitives that are available in more than one context can be used to
+  protect shared resources across the contexts in which they are
+  available.</p>
+<table class="Bl-column Bd-indent">
+  <tr id="interface">
+    <td><a class="permalink" href="#interface"><b class="Sy">interface</b></a></td>
+    <td><a class="permalink" href="#thread"><b class="Sy" id="thread">thread</b></a></td>
+    <td><a class="permalink" href="#softirq"><b class="Sy" id="softirq">softirq</b></a></td>
+    <td><a class="permalink" href="#hardirq"><b class="Sy" id="hardirq">hardirq</b></a></td>
+  </tr>
+  <tr>
+    <td><a class="Xr">atomic_ops(3)</a></td>
+    <td>yes</td>
+    <td>yes</td>
+    <td>yes</td>
+  </tr>
+  <tr>
+    <td><a class="Xr">condvar(9)</a></td>
+    <td>yes</td>
+    <td>partly</td>
+    <td>no</td>
+  </tr>
+  <tr>
+    <td><a class="Xr">membar_ops(3)</a></td>
+    <td>yes</td>
+    <td>yes</td>
+    <td>yes</td>
+  </tr>
+  <tr>
+    <td><a class="Xr">mutex(9)</a></td>
+    <td>yes</td>
+    <td>depends</td>
+    <td>depends</td>
+  </tr>
+  <tr>
+    <td><a class="Xr">rwlock(9)</a></td>
+    <td>yes</td>
+    <td>yes</td>
+    <td>no</td>
+  </tr>
+  <tr>
+    <td><a class="Xr">softint(9)</a></td>
+    <td>yes</td>
+    <td>yes</td>
+    <td>yes</td>
+  </tr>
+  <tr>
+    <td><a class="Xr">spl(9)</a></td>
+    <td>yes</td>
+    <td>no</td>
+    <td>no</td>
+  </tr>
+  <tr>
+    <td><a class="Xr">splraiseipl(9)</a></td>
+    <td>yes</td>
+    <td>no</td>
+    <td>no</td>
+  </tr>
+  <tr>
+    <td><a class="Xr">pserialize(9)</a></td>
+    <td>yes</td>
+    <td>yes</td>
+    <td>no</td>
+  </tr>
+  <tr>
+    <td><a class="Xr">psref(9)</a></td>
+    <td>yes</td>
+    <td>yes</td>
+    <td>no</td>
+  </tr>
+  <tr>
+    <td><a class="Xr">localcount(9)</a></td>
+    <td>yes</td>
+    <td>yes</td>
+    <td>no</td>
+  </tr>
+  <tr>
+    <td><a class="Xr">workqueue(9)</a></td>
+    <td>yes</td>
+    <td>yes</td>
+    <td>yes</td>
+  </tr>
+</table>
+</section>
+<section class="Sh">
+<h1 class="Sh" id="SEE_ALSO"><a class="permalink" href="#SEE_ALSO">SEE
+  ALSO</a></h1>
+<p class="Pp"><a class="Xr">atomic_ops(3)</a>, <a class="Xr">membar_ops(3)</a>,
<a class="Xr">condvar(9)</a>, <a class="Xr">mutex(9)</a>,
+  <a class="Xr">ras(9)</a>, <a class="Xr">rwlock(9)</a>,
+  <a class="Xr">softint(9)</a>, <a class="Xr">spl(9)</a>,
+  <a class="Xr">splraiseipl(9)</a>, <a class="Xr">workqueue(9)</a></p>
+</section>
+<section class="Sh">
+<h1 class="Sh" id="HISTORY"><a class="permalink" href="#HISTORY">HISTORY</a></h1>
+<p class="Pp">Initial SMP support was introduced in <span class="Ux">NetBSD
+  2.0</span> and was designed around a giant kernel lock. Through
+  <span class="Ux">NetBSD 4.0</span>, the kernel used spinlocks and a per-CPU
+  interrupt priority level (the <a class="Xr">spl(9)</a> system). These
+  mechanisms did not lend themselves well to a multiprocessor environment
+  supporting kernel preemption. The use of thread-based (lock) synchronization
+  was limited and the available synchronization primitive (lockmgr) was
+  inefficient and slow to execute. <span class="Ux">NetBSD 5.0</span>
+  introduced major performance improvements on multicore hardware, the work
+  of Andrew Doran. This work was sponsored by The
+  <span class="Ux">NetBSD</span> Foundation.</p>
+<p class="Pp">A <code class="Nm">locking</code> manual first appeared in
+  <span class="Ux">NetBSD 8.0</span> and was inspired by the corresponding
+  <code class="Nm">locking</code> manuals in <span class="Ux">FreeBSD</span>
+  and <span class="Ux">DragonFly</span>.</p>
+</section>
+<section class="Sh">
+<h1 class="Sh" id="AUTHORS"><a class="permalink" href="#AUTHORS">AUTHORS</a></h1>
+<p class="Pp"><span class="An">Kamil Rytarowski</span>
+  <<a class="Mt" href="mailto:kamil@NetBSD.org">kamil@NetBSD.org</a>>.</p>
+</section>
+</div>
+<table class="foot">
+  <tr>
+    <td class="foot-date">August 23, 2017</td>
+    <td class="foot-os">NetBSD 10.1</td>
+  </tr>
+</table>
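As an aside on the atomic memory operations described under INTERFACES above: one of the seven operation classes the page lists is compare-and-swap. The retry loop that compare-and-swap callers typically write can be sketched in userland with C11 `<stdatomic.h>` standing in for the kernel's atomic_ops(3) interfaces; the function names `atomic_add_cas` and `demo_cas` below are invented for this illustration and are not NetBSD APIs.

```c
/* CAS retry loop, sketched with C11 atomics in place of atomic_ops(3).
 * The names atomic_add_cas and demo_cas are illustrative only. */
#include <stdatomic.h>
#include <stdint.h>

/* Atomically add n to *p and return the previous value, built from
 * compare-and-swap alone (the pattern compare-and-swap callers write). */
uint32_t atomic_add_cas(_Atomic uint32_t *p, uint32_t n)
{
    uint32_t old = atomic_load(p);
    /* On failure, the CAS reloads `old` with the current value; retry. */
    while (!atomic_compare_exchange_weak(p, &old, old + n))
        ;
    return old;
}

/* Two increments of a shared counter; returns the final value. */
uint32_t demo_cas(void)
{
    static _Atomic uint32_t counter = 0;
    atomic_add_cas(&counter, 5);
    atomic_add_cas(&counter, 7);
    return atomic_load(&counter);
}
```

The loop never loses an update even under contention: a failed CAS means another thread changed the value first, and the weak form may also fail spuriously, so the retry re-reads and re-applies the addition.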
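The splraiseipl subsection above directs drivers toward condvar(9), mutex(9), and rwlock(9) for synchronization. The mutex-plus-condition-variable wait pattern behind that advice can be sketched with POSIX userland analogues (`pthread_mutex_t` in place of a kernel mutex, `pthread_cond_t` in place of a kernel CV); the names `ready`, `wait_for_resource`, and `resource_ready` are invented for this sketch, not NetBSD interfaces.

```c
/* Userland sketch of the mutex + condition-variable wait pattern that
 * locking(9) recommends for drivers.  POSIX primitives stand in for the
 * kernel ones; all names here are illustrative, not NetBSD APIs. */
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cv = PTHREAD_COND_INITIALIZER;
static bool ready = false;

/* Waiter: sleep until the shared condition becomes true. */
static void *wait_for_resource(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    while (!ready)                      /* re-check: wakeups may be spurious */
        pthread_cond_wait(&cv, &lock);  /* atomically unlocks, sleeps, relocks */
    pthread_mutex_unlock(&lock);
    return NULL;
}

/* Producer: make the condition true under the lock, then wake all waiters. */
static void resource_ready(void)
{
    pthread_mutex_lock(&lock);
    ready = true;
    pthread_cond_broadcast(&cv);
    pthread_mutex_unlock(&lock);
}

/* Run one waiter against one producer; returns 0 on success. */
int run_demo(void)
{
    pthread_t t;
    if (pthread_create(&t, NULL, wait_for_resource, NULL) != 0)
        return -1;
    resource_ready();
    pthread_join(t, NULL);
    return ready ? 0 : -1;
}
```

The `while (!ready)` re-check is the essential part of the pattern: a waiter must always re-test its condition after waking, because the wakeup may be spurious or another thread may have consumed the resource first.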