diff --git a/static/netbsd/man9/mutex.9 b/static/netbsd/man9/mutex.9
new file mode 100644
index 00000000..63c0cfde
--- /dev/null
+++ b/static/netbsd/man9/mutex.9
@@ -0,0 +1,319 @@
+.\" $NetBSD: mutex.9,v 1.35 2023/02/01 03:27:45 gutteridge Exp $
+.\"
+.\" Copyright (c) 2007, 2009 The NetBSD Foundation, Inc.
+.\" All rights reserved.
+.\"
+.\" This code is derived from software contributed to The NetBSD Foundation
+.\" by Andrew Doran.
+.\"
+.\" Redistribution and use in source and binary forms, with or without
+.\" modification, are permitted provided that the following conditions
+.\" are met:
+.\" 1. Redistributions of source code must retain the above copyright
+.\" notice, this list of conditions and the following disclaimer.
+.\" 2. Redistributions in binary form must reproduce the above copyright
+.\" notice, this list of conditions and the following disclaimer in the
+.\" documentation and/or other materials provided with the distribution.
+.\"
+.\" THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
+.\" ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+.\" TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+.\" PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
+.\" BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+.\" CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+.\" SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+.\" INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+.\" CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+.\" ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+.\" POSSIBILITY OF SUCH DAMAGE.
+.\"
+.Dd December 8, 2017
+.Dt MUTEX 9
+.Os
+.Sh NAME
+.Nm mutex ,
+.Nm mutex_init ,
+.Nm mutex_destroy ,
+.Nm mutex_enter ,
+.Nm mutex_exit ,
+.Nm mutex_ownable ,
+.Nm mutex_owned ,
+.Nm mutex_spin_enter ,
+.Nm mutex_spin_exit ,
+.Nm mutex_tryenter
+.Nd mutual exclusion primitives
+.Sh SYNOPSIS
+.In sys/mutex.h
+.Ft void
+.Fn mutex_init "kmutex_t *mtx" "kmutex_type_t type" "int ipl"
+.Ft void
+.Fn mutex_destroy "kmutex_t *mtx"
+.Ft void
+.Fn mutex_enter "kmutex_t *mtx"
+.Ft void
+.Fn mutex_exit "kmutex_t *mtx"
+.Ft int
+.Fn mutex_ownable "kmutex_t *mtx"
+.Ft int
+.Fn mutex_owned "kmutex_t *mtx"
+.Ft void
+.Fn mutex_spin_enter "kmutex_t *mtx"
+.Ft void
+.Fn mutex_spin_exit "kmutex_t *mtx"
+.Ft int
+.Fn mutex_tryenter "kmutex_t *mtx"
+.Pp
+.Cd "options DIAGNOSTIC"
+.Cd "options LOCKDEBUG"
+.Sh DESCRIPTION
+Mutexes are used in the kernel to implement mutual exclusion among LWPs
+.Pq lightweight processes
+and interrupt handlers.
+.Pp
+The
+.Vt kmutex_t
+type provides storage for the mutex object.
+This should be treated as an opaque object and not examined directly by
+consumers.
+.Pp
+Mutexes replace the
+.Xr spl 9
+system traditionally used to provide synchronization between interrupt
+handlers and LWPs.
+.Sh OPTIONS
+The following kernel options affect mutex operations:
+.Bl -tag -width Cd
+.It Cd "options DIAGNOSTIC"
+Kernels compiled with the
+.Dv DIAGNOSTIC
+option perform basic sanity checks on mutex operations.
+.It Cd "options LOCKDEBUG"
+Kernels compiled with the
+.Dv LOCKDEBUG
+option perform potentially CPU-intensive sanity checks
+on mutex operations.
+.El
+.Sh FUNCTIONS
+.Bl -tag -width Ds
+.It Fn mutex_init "mtx" "type" "ipl"
+Dynamically initialize a mutex for use.
+.Pp
+No other operations can be performed on a mutex until it has been initialized.
+Once initialized, all types of mutex are manipulated using the same interface.
+Note that
+.Fn mutex_init
+may block in order to allocate memory.
+.Pp
+The
+.Fa type
+argument must be given as
+.Dv MUTEX_DEFAULT .
+Other constants are defined but are for low-level system use and are not
+an endorsed, stable part of the interface.
+.Pp
+The type of mutex returned depends on the
+.Fa ipl
+argument:
+.Bl -tag -width Dv
+.It Dv IPL_NONE , No or one of the Dv IPL_SOFT* No constants
+An adaptive mutex will be returned.
+Adaptive mutexes provide mutual exclusion between LWPs,
+and between LWPs and soft interrupt handlers.
+.Pp
+Adaptive mutexes cannot be acquired from a hardware interrupt handler.
+An LWP may either sleep or busy-wait when attempting to acquire
+an adaptive mutex that is already held.
+.It Dv IPL_VM , IPL_SCHED , IPL_HIGH
+A spin mutex will be returned.
+Spin mutexes provide mutual exclusion between LWPs, and between LWPs
+and interrupt handlers.
+.Pp
+The
+.Fa ipl
+argument is used to pass a system interrupt priority level (IPL)
+that will block all interrupt handlers that may try to acquire the mutex.
+.Pp
+LWPs that own spin mutexes may not sleep, and therefore must not
+try to acquire adaptive mutexes or other sleep locks.
+.Pp
+A processor will always busy-wait when attempting to acquire
+a spin mutex that is already held.
+.Pp
+.Sy Note :
+Releasing a spin mutex may not lower the IPL to what it was when
+entered.
+If other spin mutexes are held, the IPL will not be lowered until the
+last one is released.
+.Pp
+This is usually not a problem because spin mutexes should be held only
+for very short durations anyway, so blocking higher-priority interrupts
+a little longer does little harm.
+But it interferes with writing assertions that the IPL is
+.Em no higher than
+a specified level.
+.El
+.Pp
+See
+.Xr spl 9
+for further information on interrupt priority levels (IPLs).
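+.Pp
+As a sketch, a driver might embed and initialize an adaptive mutex in
+its softc as follows (the structure and function names here are
+hypothetical):
+.Bd -literal -offset indent
+struct mydev_softc {
+	kmutex_t	sc_lock;	/* protects sc_state */
+	int		sc_state;
+};
+
+void
+mydev_attach(struct mydev_softc *sc)
+{
+	/*
+	 * IPL_NONE yields an adaptive mutex; it must never be
+	 * taken from a hardware interrupt handler.
+	 */
+	mutex_init(&sc->sc_lock, MUTEX_DEFAULT, IPL_NONE);
+}
+.Ed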
+.It Fn mutex_destroy "mtx"
+Release resources used by a mutex.
+The mutex may not be used after it has been destroyed.
+.Fn mutex_destroy
+may block in order to free memory.
+.It Fn mutex_enter "mtx"
+Acquire a mutex.
+If the mutex is already held, the caller will block and not return until the
+mutex is acquired.
+.Pp
+All loads and stores after
+.Fn mutex_enter
+will not be reordered before it or served from a prior cache, and hence
+will
+.Em happen after
+any prior
+.Fn mutex_exit
+to release the mutex even on another CPU or in an interrupt.
+Thus, there is a global total ordering on all loads and stores under
+the same mutex.
+.Pp
+Mutexes and other types of locks must always be acquired in a
+consistent order with respect to each other.
+Otherwise, the potential for system deadlock exists.
+.Pp
+Adaptive mutexes and other types of lock that can sleep may
+not be acquired while a spin mutex is held by the caller.
+.Pp
+When acquiring a spin mutex, the IPL of the current CPU will be raised to
+the level set in
+.Fn mutex_init
+if it is not already equal or higher.
+.It Fn mutex_exit "mtx"
+Release a mutex.
+The mutex must have been previously acquired by the caller.
+Mutexes may be released out of order as needed.
+.Pp
+All loads and stores before
+.Fn mutex_exit
+will not be reordered after it or delayed in a write buffer, and hence
+will
+.Em happen before
+any subsequent
+.Fn mutex_enter
+to acquire the mutex even on another CPU or in an interrupt.
+Thus, there is a global total ordering on all loads and stores under
+the same mutex.
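+.Pp
+A typical critical section simply brackets the protected accesses with
+the two calls (the names below are hypothetical):
+.Bd -literal -offset indent
+mutex_enter(&sc->sc_lock);
+sc->sc_state = MYDEV_RUNNING;	/* protected by sc_lock */
+mutex_exit(&sc->sc_lock);
+.Ed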
+.It Fn mutex_ownable "mtx"
+When compiled with
+.Dv LOCKDEBUG
+ensure that the current process can successfully acquire
+.Ar mtx .
+If
+.Ar mtx
+is already owned by the current process, the system will panic
+with a
+.Dq locking against myself\^
+error.
+.Pp
+This function is needed because
+.Fn mutex_owned
+does not differentiate between a spin mutex owned by the current
+process and one owned by another process.
+.Fn mutex_ownable
+is reasonably heavy-weight, and should only be used with
+.Xr KDASSERT 9 .
+.It Fn mutex_owned "mtx"
+For adaptive mutexes, return non-zero if the current LWP holds the mutex.
+For spin mutexes, return non-zero if the mutex is held, potentially by the
+current processor.
+Otherwise, return zero.
+.Pp
+.Fn mutex_owned
+is provided for making diagnostic checks to verify that a lock is held.
+For example:
+.Dl KASSERT(mutex_owned(&driver_lock));
+.Pp
+It should not be used to make locking decisions at run time.
+For spin mutexes, it must not be used to verify that a lock is not held.
+.It Fn mutex_spin_enter "mtx"
+Equivalent to
+.Fn mutex_enter ,
+but may only be used when it is known that
+.Ar mtx
+is a spin mutex.
+Implies the same memory ordering as
+.Fn mutex_enter .
+On some architectures, this can substantially reduce the cost of acquiring
+a spin mutex.
+.It Fn mutex_spin_exit "mtx"
+Equivalent to
+.Fn mutex_exit ,
+but may only be used when it is known that
+.Ar mtx
+is a spin mutex.
+Implies the same memory ordering as
+.Fn mutex_exit .
+On some architectures, this can substantially reduce the cost of releasing
+a spin mutex.
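+.Pp
+For instance, a hardware interrupt handler might serialize against LWPs
+using a spin mutex initialized at the device's IPL (the names here are
+hypothetical):
+.Bd -literal -offset indent
+/* At attach: mutex_init(&sc->sc_intr_lock, MUTEX_DEFAULT, IPL_VM); */
+int
+mydev_intr(void *arg)
+{
+	struct mydev_softc *sc = arg;
+
+	mutex_spin_enter(&sc->sc_intr_lock);
+	/* ... service the device ... */
+	mutex_spin_exit(&sc->sc_intr_lock);
+	return 1;
+}
+.Ed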
+.It Fn mutex_tryenter "mtx"
+Try to acquire a mutex, but do not block if the mutex is already held.
+Returns non-zero if the mutex was acquired, or zero if the mutex was
+already held.
+.Pp
+.Fn mutex_tryenter
+can be used as an optimization when acquiring locks in the wrong order.
+For example, in a setting where the convention is that
+.Va first_lock
+must be acquired before
+.Va second_lock ,
+the following can be used to optimistically lock in reverse order:
+.Bd -literal -offset indent
+/* We hold second_lock, but not first_lock. */
+KASSERT(mutex_owned(&second_lock));
+
+if (!mutex_tryenter(&first_lock)) {
+ /* Failed to get it - lock in the correct order. */
+ mutex_exit(&second_lock);
+ mutex_enter(&first_lock);
+ mutex_enter(&second_lock);
+
+ /*
+ * We may need to recheck any conditions the code
+ * path depends on, as we released second_lock
+ * briefly.
+ */
+}
+.Ed
+.El
+.Sh CODE REFERENCES
+The core of the mutex implementation is in
+.Pa sys/kern/kern_mutex.c .
+.Pp
+The header file
+.Pa sys/sys/mutex.h
+describes the public interface, and interfaces that machine-dependent
+code must provide to support mutexes.
+.Sh SEE ALSO
+.Xr atomic_ops 3 ,
+.Xr membar_ops 3 ,
+.Xr options 4 ,
+.Xr lockstat 8 ,
+.Xr condvar 9 ,
+.Xr kpreempt 9 ,
+.Xr rwlock 9 ,
+.Xr spl 9
+.Pp
+.Rs
+.%A Jim Mauro
+.%A Richard McDougall
+.%T Solaris Internals: Core Kernel Architecture
+.%I Prentice Hall
+.%D 2001
+.%O ISBN 0-13-022496-0
+.Re
+.Sh HISTORY
+The mutex primitives first appeared in
+.Nx 5.0 .
+.Fn mutex_ownable
+first appeared in
+.Nx 8.0 .