path: root/static/netbsd/man9/uvm.9 3.html
authorJacob McDonnell <jacob@jacobmcdonnell.com>2026-04-25 19:55:15 -0400
committerJacob McDonnell <jacob@jacobmcdonnell.com>2026-04-25 19:55:15 -0400
commit253e67c8b3a72b3a4757fdbc5845297628db0a4a (patch)
treeadf53b66087aa30dfbf8bf391a1dadb044c3bf4d /static/netbsd/man9/uvm.9 3.html
parenta9157ce950dfe2fc30795d43b9d79b9d1bffc48b (diff)
docs: Added All NetBSD Manuals
Diffstat (limited to 'static/netbsd/man9/uvm.9 3.html')
-rw-r--r--static/netbsd/man9/uvm.9 3.html580
1 files changed, 580 insertions, 0 deletions
diff --git a/static/netbsd/man9/uvm.9 3.html b/static/netbsd/man9/uvm.9 3.html
new file mode 100644
index 00000000..19108045
--- /dev/null
+++ b/static/netbsd/man9/uvm.9 3.html
@@ -0,0 +1,580 @@
+<table class="head">
+ <tr>
+ <td class="head-ltitle">UVM(9)</td>
+ <td class="head-vol">Kernel Developer's Manual</td>
+ <td class="head-rtitle">UVM(9)</td>
+ </tr>
+</table>
+<div class="manual-text">
+<section class="Sh">
+<h1 class="Sh" id="NAME"><a class="permalink" href="#NAME">NAME</a></h1>
+<p class="Pp"><code class="Nm">uvm</code> &#x2014; <span class="Nd">virtual
+ memory system external interface</span></p>
+</section>
+<section class="Sh">
+<h1 class="Sh" id="SYNOPSIS"><a class="permalink" href="#SYNOPSIS">SYNOPSIS</a></h1>
+<p class="Pp"><code class="In">#include
+ &lt;<a class="In">sys/param.h</a>&gt;</code>
+ <br/>
+ <code class="In">#include &lt;<a class="In">uvm/uvm.h</a>&gt;</code></p>
+</section>
+<section class="Sh">
+<h1 class="Sh" id="DESCRIPTION"><a class="permalink" href="#DESCRIPTION">DESCRIPTION</a></h1>
+<p class="Pp">The UVM virtual memory system manages access to the computer's
+ memory resources. User processes and the kernel access these resources
+ through UVM's external interface. UVM's external interface includes
+ functions that:</p>
+<p class="Pp"></p>
+<ul class="Bl-dash Bl-compact">
+ <li>initialize UVM sub-systems</li>
+ <li>manage virtual address spaces</li>
+ <li>resolve page faults</li>
+ <li>memory map files and devices</li>
+ <li>perform uio-based I/O to virtual memory</li>
+ <li>allocate and free kernel virtual memory</li>
+ <li>allocate and free physical memory</li>
+</ul>
+<p class="Pp">In addition to exporting these services, UVM has two kernel-level
+ processes: pagedaemon and swapper. The pagedaemon process sleeps until
+  physical memory becomes scarce. When that happens, the pagedaemon is awakened. It
+ scans physical memory, paging out and freeing memory that has not been
+ recently used. The swapper process swaps in runnable processes that are
+ currently swapped out, if there is room.</p>
+<p class="Pp">There are also several miscellaneous functions.</p>
+</section>
+<section class="Sh">
+<h1 class="Sh" id="INITIALIZATION"><a class="permalink" href="#INITIALIZATION">INITIALIZATION</a></h1>
+<dl class="Bl-ohang">
+ <dt id="uvm_init"><var class="Ft">void</var></dt>
+ <dd><a class="permalink" href="#uvm_init"><code class="Fn">uvm_init</code></a>(<var class="Fa">void</var>);</dd>
+ <dt><var class="Ft">void</var></dt>
+ <dd><code class="Fn">uvm_init_limits</code>(<var class="Fa">struct lwp
+ *l</var>);</dd>
+ <dt><var class="Ft">void</var></dt>
+ <dd><code class="Fn">uvm_setpagesize</code>(<var class="Fa">void</var>);</dd>
+ <dt><var class="Ft">void</var></dt>
+ <dd><code class="Fn">uvm_swap_init</code>(<var class="Fa">void</var>);</dd>
+</dl>
+<p class="Pp" id="uvm_init~2"><a class="permalink" href="#uvm_init~2"><code class="Fn">uvm_init</code></a>()
+ sets up the UVM system at system boot time, after the console has been
+  set up. It initializes global state, the page, map, kernel virtual memory
+ state, machine-dependent physical map, kernel memory allocator, pager and
+ anonymous memory sub-systems, and then enables paging of kernel objects.</p>
+<p class="Pp" id="uvm_init_limits"><a class="permalink" href="#uvm_init_limits"><code class="Fn">uvm_init_limits</code></a>()
+  initializes process limits for the named process. It is used during system
+  startup for process zero, before any other processes are created.</p>
+<p class="Pp" id="uvm_md_init"><a class="permalink" href="#uvm_md_init"><code class="Fn">uvm_md_init</code></a>()
+ does early boot initialization. This currently includes:
+ <a class="permalink" href="#uvm_setpagesize"><code class="Fn" id="uvm_setpagesize">uvm_setpagesize</code></a>()
+ which initializes the uvmexp members pagesize (if not already done by
+ machine-dependent code), pageshift and pagemask.
+ <a class="permalink" href="#uvm_physseg_init"><code class="Fn" id="uvm_physseg_init">uvm_physseg_init</code></a>()
+  which initializes the <a class="Xr">uvm_hotplug(9)</a> subsystem. It should
+ be called by machine-dependent code early in the
+ <a class="permalink" href="#pmap_init"><code class="Fn" id="pmap_init">pmap_init</code></a>()
+ call (see <a class="Xr">pmap(9)</a>).</p>
+<p class="Pp" id="uvm_swap_init"><a class="permalink" href="#uvm_swap_init"><code class="Fn">uvm_swap_init</code></a>()
+ initializes the swap sub-system.</p>
+</section>
+<section class="Sh">
+<h1 class="Sh" id="VIRTUAL_ADDRESS_SPACE_MANAGEMENT"><a class="permalink" href="#VIRTUAL_ADDRESS_SPACE_MANAGEMENT">VIRTUAL
+ ADDRESS SPACE MANAGEMENT</a></h1>
+<p class="Pp">See <a class="Xr">uvm_map(9)</a>.</p>
+</section>
+<section class="Sh">
+<h1 class="Sh" id="PAGE_FAULT_HANDLING"><a class="permalink" href="#PAGE_FAULT_HANDLING">PAGE
+ FAULT HANDLING</a></h1>
+<dl class="Bl-ohang">
+ <dt><var class="Ft">int</var></dt>
+ <dd><code class="Fn">uvm_fault</code>(<var class="Fa">struct vm_map
+ *orig_map</var>, <var class="Fa">vaddr_t vaddr</var>,
+ <var class="Fa">vm_prot_t access_type</var>);</dd>
+</dl>
+<p class="Pp" id="uvm_fault"><a class="permalink" href="#uvm_fault"><code class="Fn">uvm_fault</code></a>()
+  is the main entry point for faults. It takes <var class="Fa">orig_map</var>
+  as the map the fault originated in, <var class="Fa">vaddr</var> as the
+  offset into the map at which the fault occurred, and
+  <var class="Fa">access_type</var> describing the type of access requested.
+  <code class="Fn">uvm_fault</code>()
+ returns a standard UVM return value.</p>
+</section>
+<section class="Sh">
+<h1 class="Sh" id="MEMORY_MAPPING_FILES_AND_DEVICES"><a class="permalink" href="#MEMORY_MAPPING_FILES_AND_DEVICES">MEMORY
+ MAPPING FILES AND DEVICES</a></h1>
+<p class="Pp">See <a class="Xr">ubc(9)</a>.</p>
+</section>
+<section class="Sh">
+<h1 class="Sh" id="VIRTUAL_MEMORY_I/O"><a class="permalink" href="#VIRTUAL_MEMORY_I/O">VIRTUAL
+ MEMORY I/O</a></h1>
+<dl class="Bl-ohang">
+ <dt><var class="Ft">int</var></dt>
+ <dd><code class="Fn">uvm_io</code>(<var class="Fa">struct vm_map *map</var>,
+ <var class="Fa">struct uio *uio</var>);</dd>
+</dl>
+<p class="Pp" id="uvm_io"><a class="permalink" href="#uvm_io"><code class="Fn">uvm_io</code></a>()
+ performs the I/O described in <var class="Fa">uio</var> on the memory
+ described in <var class="Fa">map</var>.</p>
+</section>
+<section class="Sh">
+<h1 class="Sh" id="ALLOCATION_OF_KERNEL_MEMORY"><a class="permalink" href="#ALLOCATION_OF_KERNEL_MEMORY">ALLOCATION
+ OF KERNEL MEMORY</a></h1>
+<p class="Pp">See <a class="Xr">uvm_km(9)</a>.</p>
+</section>
+<section class="Sh">
+<h1 class="Sh" id="ALLOCATION_OF_PHYSICAL_MEMORY"><a class="permalink" href="#ALLOCATION_OF_PHYSICAL_MEMORY">ALLOCATION
+ OF PHYSICAL MEMORY</a></h1>
+<dl class="Bl-ohang">
+ <dt><var class="Ft">struct vm_page *</var></dt>
+ <dd><code class="Fn">uvm_pagealloc</code>(<var class="Fa">struct uvm_object
+ *uobj</var>, <var class="Fa">voff_t off</var>, <var class="Fa">struct
+ vm_anon *anon</var>, <var class="Fa">int flags</var>);</dd>
+ <dt><var class="Ft">void</var></dt>
+ <dd><code class="Fn">uvm_pagerealloc</code>(<var class="Fa">struct vm_page
+ *pg</var>, <var class="Fa">struct uvm_object *newobj</var>,
+ <var class="Fa">voff_t newoff</var>);</dd>
+ <dt><var class="Ft">void</var></dt>
+ <dd><code class="Fn">uvm_pagefree</code>(<var class="Fa">struct vm_page
+ *pg</var>);</dd>
+ <dt><var class="Ft">int</var></dt>
+ <dd><code class="Fn">uvm_pglistalloc</code>(<var class="Fa">psize_t
+ size</var>, <var class="Fa">paddr_t low</var>, <var class="Fa">paddr_t
+ high</var>, <var class="Fa">paddr_t alignment</var>,
+ <var class="Fa">paddr_t boundary</var>, <var class="Fa">struct pglist
+ *rlist</var>, <var class="Fa">int nsegs</var>, <var class="Fa">int
+ waitok</var>);</dd>
+ <dt><var class="Ft">void</var></dt>
+ <dd><code class="Fn">uvm_pglistfree</code>(<var class="Fa">struct pglist
+ *list</var>);</dd>
+ <dt><var class="Ft">void</var></dt>
+ <dd><code class="Fn">uvm_page_physload</code>(<var class="Fa">paddr_t
+ start</var>, <var class="Fa">paddr_t end</var>, <var class="Fa">paddr_t
+ avail_start</var>, <var class="Fa">paddr_t avail_end</var>,
+ <var class="Fa">int free_list</var>);</dd>
+</dl>
+<p class="Pp" id="uvm_pagealloc"><a class="permalink" href="#uvm_pagealloc"><code class="Fn">uvm_pagealloc</code></a>()
+ allocates a page of memory at virtual address <var class="Fa">off</var> in
+ either the object <var class="Fa">uobj</var> or the anonymous memory
+ <var class="Fa">anon</var>, which must be locked by the caller. Only one of
+  <var class="Fa">uobj</var> and <var class="Fa">anon</var> can be
+  non-<code class="Dv">NULL</code>. Returns <code class="Dv">NULL</code> when no
+ page can be found. The flags can be any of</p>
+<div class="Bd Pp Li">
+<pre>#define UVM_PGA_USERESERVE 0x0001 /* ok to use reserve pages */
+#define UVM_PGA_ZERO 0x0002 /* returned page must be zero'd */</pre>
+</div>
+<p class="Pp"><code class="Dv">UVM_PGA_USERESERVE</code> means to allocate a
+ page even if that will result in the number of free pages being lower than
+ <code class="Dv">uvmexp.reserve_pagedaemon</code> (if the current thread is
+ the pagedaemon) or <code class="Dv">uvmexp.reserve_kernel</code> (if the
+ current thread is not the pagedaemon). <code class="Dv">UVM_PGA_ZERO</code>
+ causes the returned page to be filled with zeroes, either by allocating it
+ from a pool of pre-zeroed pages or by zeroing it in-line as necessary.</p>
+<p class="Pp" id="uvm_pagerealloc"><a class="permalink" href="#uvm_pagerealloc"><code class="Fn">uvm_pagerealloc</code></a>()
+ reallocates page <var class="Fa">pg</var> to a new object
+ <var class="Fa">newobj</var>, at a new offset
+ <var class="Fa">newoff</var>.</p>
+<p class="Pp" id="uvm_pagefree"><a class="permalink" href="#uvm_pagefree"><code class="Fn">uvm_pagefree</code></a>()
+ frees the physical page <var class="Fa">pg</var>. If the content of the page
+  is known to be zero-filled, the caller should set
+ <code class="Dv">PG_ZERO</code> in pg-&gt;flags so that the page allocator
+ will use the page to serve future <code class="Dv">UVM_PGA_ZERO</code>
+ requests efficiently.</p>
+<p class="Pp" id="uvm_pglistalloc"><a class="permalink" href="#uvm_pglistalloc"><code class="Fn">uvm_pglistalloc</code></a>()
+  allocates a list of pages of <var class="Fa">size</var> bytes, subject to
+ various constraints. <var class="Fa">low</var> and
+ <var class="Fa">high</var> describe the lowest and highest addresses
+ acceptable for the list. If <var class="Fa">alignment</var> is non-zero, it
+ describes the required alignment of the list, in power-of-two notation. If
+ <var class="Fa">boundary</var> is non-zero, no segment of the list may cross
+ this power-of-two boundary, relative to zero. <var class="Fa">nsegs</var> is
+ the maximum number of physically contiguous segments. If
+ <var class="Fa">waitok</var> is non-zero, the function may sleep until
+ enough memory is available. (It also may give up in some situations, so a
+ non-zero <var class="Fa">waitok</var> does not imply that
+ <code class="Fn">uvm_pglistalloc</code>() cannot return an error.) The
+ allocated memory is returned in the <var class="Fa">rlist</var> list; the
+ caller has to provide storage only, the list is initialized by
+ <code class="Fn">uvm_pglistalloc</code>().</p>
+<p class="Pp" id="uvm_pglistfree"><a class="permalink" href="#uvm_pglistfree"><code class="Fn">uvm_pglistfree</code></a>()
+  frees the list of pages pointed to by <var class="Fa">list</var>. If the
+  content of a page is known to be zero-filled, the caller should set
+  <code class="Dv">PG_ZERO</code> in that page's flags so that the page allocator
+ will use the page to serve future <code class="Dv">UVM_PGA_ZERO</code>
+ requests efficiently.</p>
+<p class="Pp" id="uvm_page_physload"><a class="permalink" href="#uvm_page_physload"><code class="Fn">uvm_page_physload</code></a>()
+ loads physical memory segments into VM space on the specified
+ <var class="Fa">free_list</var>. It must be called at system boot time to
+ set up physical memory management pages. The arguments describe the
+ <var class="Fa">start</var> and <var class="Fa">end</var> of the physical
+ addresses of the segment, and the available start and end addresses of pages
+ not already in use. If a system has memory banks of different speeds the
+ slower memory should be given a higher <var class="Fa">free_list</var>
+ value.</p>
+</section>
+<section class="Sh">
+<h1 class="Sh" id="PROCESSES"><a class="permalink" href="#PROCESSES">PROCESSES</a></h1>
+<dl class="Bl-ohang">
+ <dt><var class="Ft">void</var></dt>
+ <dd><code class="Fn">uvm_pageout</code>(<var class="Fa">void</var>);</dd>
+ <dt><var class="Ft">void</var></dt>
+ <dd><code class="Fn">uvm_scheduler</code>(<var class="Fa">void</var>);</dd>
+</dl>
+<p class="Pp" id="uvm_pageout"><a class="permalink" href="#uvm_pageout"><code class="Fn">uvm_pageout</code></a>()
+ is the main loop for the page daemon.</p>
+<p class="Pp" id="uvm_scheduler"><a class="permalink" href="#uvm_scheduler"><code class="Fn">uvm_scheduler</code></a>()
+ is the process zero main loop, which is to be called after the system has
+ finished starting other processes. It handles the swapping in of runnable,
+ swapped out processes in priority order.</p>
+</section>
+<section class="Sh">
+<h1 class="Sh" id="PAGE_LOAN"><a class="permalink" href="#PAGE_LOAN">PAGE
+ LOAN</a></h1>
+<dl class="Bl-ohang">
+ <dt><var class="Ft">int</var></dt>
+ <dd><code class="Fn">uvm_loan</code>(<var class="Fa">struct vm_map *map</var>,
+ <var class="Fa">vaddr_t start</var>, <var class="Fa">vsize_t len</var>,
+ <var class="Fa">void *v</var>, <var class="Fa">int flags</var>);</dd>
+ <dt><var class="Ft">void</var></dt>
+ <dd><code class="Fn">uvm_unloan</code>(<var class="Fa">void *v</var>,
+ <var class="Fa">int npages</var>, <var class="Fa">int flags</var>);</dd>
+</dl>
+<p class="Pp" id="uvm_loan"><a class="permalink" href="#uvm_loan"><code class="Fn">uvm_loan</code></a>()
+ loans pages in a map out to anons or to the kernel.
+ <var class="Fa">map</var> should be unlocked, <var class="Fa">start</var>
+ and <var class="Fa">len</var> should be multiples of
+ <code class="Dv">PAGE_SIZE</code>. Argument <var class="Fa">flags</var>
+ should be one of</p>
+<div class="Bd Pp Li">
+<pre>#define UVM_LOAN_TOANON 0x01 /* loan to anons */
+#define UVM_LOAN_TOPAGE 0x02 /* loan to kernel */</pre>
+</div>
+<p class="Pp" id="uvm_loan~2"><var class="Fa">v</var> should be a pointer to an
+  array of pointers to <code class="Li">struct anon</code> or
+  <code class="Li">struct vm_page</code>, as appropriate. The caller must
+  allocate memory for the array and ensure it is big enough to hold
+  <var class="Fa">len / PAGE_SIZE</var> pointers. Returns 0 on success, or an
+  appropriate error number otherwise. Note that wired pages cannot be loaned
+ out and
+ <a class="permalink" href="#uvm_loan~2"><code class="Fn">uvm_loan</code></a>()
+ will fail in that case.</p>
+<p class="Pp" id="uvm_unloan"><a class="permalink" href="#uvm_unloan"><code class="Fn">uvm_unloan</code></a>()
+  kills loans on pages or anons. <var class="Fa">v</var> must point to the
+  array of pointers initialized by a previous call to
+  <code class="Fn">uvm_loan</code>(). <var class="Fa">npages</var> should
+  match the number of pages allocated for the loan, which is also the number
+  of items in the array. Argument <var class="Fa">flags</var> should be one of
+<div class="Bd Pp Li">
+<pre>#define UVM_LOAN_TOANON 0x01 /* loan to anons */
+#define UVM_LOAN_TOPAGE 0x02 /* loan to kernel */</pre>
+</div>
+<p class="Pp" id="uvm_loan~3">and should match what was used for previous call
+ to
+ <a class="permalink" href="#uvm_loan~3"><code class="Fn">uvm_loan</code></a>().</p>
+</section>
+<section class="Sh">
+<h1 class="Sh" id="MISCELLANEOUS_FUNCTIONS"><a class="permalink" href="#MISCELLANEOUS_FUNCTIONS">MISCELLANEOUS
+ FUNCTIONS</a></h1>
+<dl class="Bl-ohang">
+ <dt><var class="Ft">struct uvm_object *</var></dt>
+ <dd><code class="Fn">uao_create</code>(<var class="Fa">vsize_t size</var>,
+ <var class="Fa">int flags</var>);</dd>
+ <dt><var class="Ft">void</var></dt>
+ <dd><code class="Fn">uao_detach</code>(<var class="Fa">struct uvm_object
+ *uobj</var>);</dd>
+ <dt><var class="Ft">void</var></dt>
+ <dd><code class="Fn">uao_reference</code>(<var class="Fa">struct uvm_object
+ *uobj</var>);</dd>
+ <dt><var class="Ft">bool</var></dt>
+ <dd><code class="Fn">uvm_chgkprot</code>(<var class="Fa">void *addr</var>,
+ <var class="Fa">size_t len</var>, <var class="Fa">int rw</var>);</dd>
+ <dt><var class="Ft">void</var></dt>
+ <dd><code class="Fn">uvm_kernacc</code>(<var class="Fa">void *addr</var>,
+ <var class="Fa">size_t len</var>, <var class="Fa">int rw</var>);</dd>
+ <dt><var class="Ft">int</var></dt>
+ <dd><code class="Fn">uvm_vslock</code>(<var class="Fa">struct vmspace
+ *vs</var>, <var class="Fa">void *addr</var>, <var class="Fa">size_t
+ len</var>, <var class="Fa">vm_prot_t prot</var>);</dd>
+ <dt><var class="Ft">void</var></dt>
+ <dd><code class="Fn">uvm_vsunlock</code>(<var class="Fa">struct vmspace
+ *vs</var>, <var class="Fa">void *addr</var>, <var class="Fa">size_t
+ len</var>);</dd>
+ <dt><var class="Ft">void</var></dt>
+ <dd><code class="Fn">uvm_meter</code>(<var class="Fa">void</var>);</dd>
+ <dt><var class="Ft">void</var></dt>
+ <dd><code class="Fn">uvm_proc_fork</code>(<var class="Fa">struct proc
+ *p1</var>, <var class="Fa">struct proc *p2</var>, <var class="Fa">bool
+ shared</var>);</dd>
+ <dt><var class="Ft">int</var></dt>
+ <dd><code class="Fn">uvm_grow</code>(<var class="Fa">struct proc *p</var>,
+ <var class="Fa">vaddr_t sp</var>);</dd>
+ <dt><var class="Ft">void</var></dt>
+ <dd><code class="Fn">uvn_findpages</code>(<var class="Fa">struct uvm_object
+ *uobj</var>, <var class="Fa">voff_t offset</var>, <var class="Fa">int
+ *npagesp</var>, <var class="Fa">struct vm_page **pps</var>,
+ <var class="Fa">int flags</var>);</dd>
+ <dt><var class="Ft">void</var></dt>
+ <dd><code class="Fn">uvm_vnp_setsize</code>(<var class="Fa">struct vnode
+ *vp</var>, <var class="Fa">voff_t newsize</var>);</dd>
+</dl>
+<p class="Pp" id="uao_create">The
+ <a class="permalink" href="#uao_create"><code class="Fn">uao_create</code></a>(),
+ <a class="permalink" href="#uao_detach"><code class="Fn" id="uao_detach">uao_detach</code></a>(),
+ and <code class="Fn">uao_reference</code>() functions operate on anonymous
+ memory objects, such as those used to support System V shared memory.
+ <code class="Fn">uao_create</code>() returns an object of size
+ <var class="Fa">size</var> with flags:</p>
+<div class="Bd Pp Li">
+<pre>#define UAO_FLAG_KERNOBJ 0x1 /* create kernel object */
+#define UAO_FLAG_KERNSWAP 0x2 /* enable kernel swap */</pre>
+</div>
+<p class="Pp" id="uao_reference">which can only be used once each at system boot
+ time.
+ <a class="permalink" href="#uao_reference"><code class="Fn">uao_reference</code></a>()
+ creates an additional reference to the named anonymous memory object.
+ <a class="permalink" href="#uao_detach~2"><code class="Fn" id="uao_detach~2">uao_detach</code></a>()
+ removes a reference from the named anonymous memory object, destroying it if
+ removing the last reference.</p>
+<p class="Pp" id="uvm_chgkprot"><a class="permalink" href="#uvm_chgkprot"><code class="Fn">uvm_chgkprot</code></a>()
+ changes the protection of kernel memory from <var class="Fa">addr</var> to
+ <var class="Fa">addr + len</var> to the value of <var class="Fa">rw</var>.
+ This is primarily useful for debuggers, for setting breakpoints. This
+ function is only available with options <code class="Dv">KGDB</code>.</p>
+<p class="Pp" id="uvm_kernacc"><a class="permalink" href="#uvm_kernacc"><code class="Fn">uvm_kernacc</code></a>()
+ checks the access at address <var class="Fa">addr</var> to
+ <var class="Fa">addr + len</var> for <var class="Fa">rw</var> access in the
+ kernel address space.</p>
+<p class="Pp" id="uvm_vslock"><a class="permalink" href="#uvm_vslock"><code class="Fn">uvm_vslock</code></a>()
+ and
+ <a class="permalink" href="#uvm_vsunlock"><code class="Fn" id="uvm_vsunlock">uvm_vsunlock</code></a>()
+  control the wiring and unwiring of pages in the vmspace
+  <var class="Fa">vs</var>, from <var class="Fa">addr</var> to
+  <var class="Fa">addr + len</var>. These
+ functions are normally used to wire memory for I/O.</p>
+<p class="Pp" id="uvm_meter"><a class="permalink" href="#uvm_meter"><code class="Fn">uvm_meter</code></a>()
+ calculates the load average.</p>
+<p class="Pp" id="uvm_proc_fork"><a class="permalink" href="#uvm_proc_fork"><code class="Fn">uvm_proc_fork</code></a>()
+  forks a virtual address space for processes (old) <var class="Fa">p1</var>
+  and (new) <var class="Fa">p2</var>. If the <var class="Fa">shared</var>
+  argument is non-zero, p1 shares its address space with p2; otherwise a new
+ address space is created. This function currently has no return value, and
+ thus cannot fail. In the future, this function will be changed to allow it
+ to fail in low memory conditions.</p>
+<p class="Pp" id="uvm_grow"><a class="permalink" href="#uvm_grow"><code class="Fn">uvm_grow</code></a>()
+ increases the stack segment of process <var class="Fa">p</var> to include
+ <var class="Fa">sp</var>.</p>
+<p class="Pp" id="uvn_findpages"><a class="permalink" href="#uvn_findpages"><code class="Fn">uvn_findpages</code></a>()
+ looks up or creates pages in <var class="Fa">uobj</var> at offset
+ <var class="Fa">offset</var>, marks them busy and returns them in the
+ <var class="Fa">pps</var> array. Currently <var class="Fa">uobj</var> must
+ be a vnode object. The number of pages requested is pointed to by
+ <var class="Fa">npagesp</var>, and this value is updated with the actual
+ number of pages returned. The flags can be any bitwise inclusive-or of:</p>
+<p class="Pp"></p>
+<div class="Bd-indent">
+<dl class="Bl-tag Bl-compact">
+ <dt id="UFP_ALL"><a class="permalink" href="#UFP_ALL"><code class="Dv">UFP_ALL</code></a></dt>
+ <dd>Zero pseudo-flag meaning return all pages.</dd>
+ <dt id="UFP_NOWAIT"><a class="permalink" href="#UFP_NOWAIT"><code class="Dv">UFP_NOWAIT</code></a></dt>
+ <dd>Don't sleep &#x2014; yield <code class="Dv">NULL</code> for busy pages or
+ for uncached pages for which allocation would sleep.</dd>
+ <dt id="UFP_NOALLOC"><a class="permalink" href="#UFP_NOALLOC"><code class="Dv">UFP_NOALLOC</code></a></dt>
+ <dd>Don't allocate &#x2014; yield <code class="Dv">NULL</code> for uncached
+ pages.</dd>
+ <dt id="UFP_NOCACHE"><a class="permalink" href="#UFP_NOCACHE"><code class="Dv">UFP_NOCACHE</code></a></dt>
+ <dd>Don't use cached pages &#x2014; yield <code class="Dv">NULL</code>
+ instead.</dd>
+ <dt id="UFP_NORDONLY"><a class="permalink" href="#UFP_NORDONLY"><code class="Dv">UFP_NORDONLY</code></a></dt>
+ <dd>Don't yield read-only pages &#x2014; yield <code class="Dv">NULL</code>
+ for pages marked <code class="Dv">PG_READONLY</code>.</dd>
+ <dt id="UFP_DIRTYONLY"><a class="permalink" href="#UFP_DIRTYONLY"><code class="Dv">UFP_DIRTYONLY</code></a></dt>
+ <dd>Don't yield clean pages &#x2014; stop early at the first clean one. As a
+ side effect, mark yielded dirty pages clean. Caller must write them to
+ permanent storage before unbusying.</dd>
+ <dt id="UFP_BACKWARD"><a class="permalink" href="#UFP_BACKWARD"><code class="Dv">UFP_BACKWARD</code></a></dt>
+ <dd>Traverse pages in reverse order. If
+ <a class="permalink" href="#uvn_findpages~2"><code class="Fn" id="uvn_findpages~2">uvn_findpages</code></a>()
+ returns early, it will have filled
+ <code class="Li">*</code><var class="Fa">npagesp</var> entries at the end
+ of <var class="Fa">pps</var> rather than the beginning.</dd>
+</dl>
+</div>
+<p class="Pp" id="uvm_vnp_setsize"><a class="permalink" href="#uvm_vnp_setsize"><code class="Fn">uvm_vnp_setsize</code></a>()
+ sets the size of vnode <var class="Fa">vp</var> to
+ <var class="Fa">newsize</var>. Caller must hold a reference to the vnode. If
+ the vnode shrinks, pages no longer used are discarded.</p>
+</section>
+<section class="Sh">
+<h1 class="Sh" id="MISCELLANEOUS_MACROS"><a class="permalink" href="#MISCELLANEOUS_MACROS">MISCELLANEOUS
+ MACROS</a></h1>
+<dl class="Bl-ohang">
+ <dt><var class="Ft">paddr_t</var></dt>
+ <dd><code class="Fn">atop</code>(<var class="Fa">paddr_t pa</var>);</dd>
+ <dt><var class="Ft">paddr_t</var></dt>
+ <dd><code class="Fn">ptoa</code>(<var class="Fa">paddr_t pn</var>);</dd>
+ <dt><var class="Ft">paddr_t</var></dt>
+ <dd><code class="Fn">round_page</code>(<var class="Fa">address</var>);</dd>
+ <dt><var class="Ft">paddr_t</var></dt>
+ <dd><code class="Fn">trunc_page</code>(<var class="Fa">address</var>);</dd>
+</dl>
+<p class="Pp" id="atop">The
+ <a class="permalink" href="#atop"><code class="Fn">atop</code></a>() macro
+ converts a physical address <var class="Fa">pa</var> into a page number. The
+ <a class="permalink" href="#ptoa"><code class="Fn" id="ptoa">ptoa</code></a>()
+ macro does the opposite by converting a page number <var class="Fa">pn</var>
+ into a physical address.</p>
+<p class="Pp" id="round_page"><a class="permalink" href="#round_page"><code class="Fn">round_page</code></a>()
+ and
+ <a class="permalink" href="#trunc_page"><code class="Fn" id="trunc_page">trunc_page</code></a>()
+  macros round <var class="Fa">address</var> up and down, respectively, to the
+  nearest page boundary. These macros work for either addresses or byte
+  counts.</p>
+</section>
+<section class="Sh">
+<h1 class="Sh" id="SYSCTL"><a class="permalink" href="#SYSCTL">SYSCTL</a></h1>
+<p class="Pp">UVM provides support for the <code class="Dv">CTL_VM</code> domain
+ of the <a class="Xr">sysctl(3)</a> hierarchy. It handles the
+  <code class="Dv">VM_LOADAVG</code>, <code class="Dv">VM_METER</code>,
+  <code class="Dv">VM_UVMEXP</code>, and <code class="Dv">VM_UVMEXP2</code>
+  nodes, which return the current load averages, the current VM totals, the
+  uvmexp structure, and a kernel-version-independent view of the uvmexp
+  structure, respectively. It also exports a number of tunables that
+ control how much VM space is allowed to be consumed by various tasks. The
+ load averages are typically accessed from userland using the
+ <a class="Xr">getloadavg(3)</a> function. The uvmexp structure has all
+ global state of the UVM system, and has the following members:</p>
+<div class="Bd Pp Li">
+<pre>/* vm_page constants */
+int pagesize; /* size of a page (PAGE_SIZE): must be power of 2 */
+int pagemask; /* page mask */
+int pageshift; /* page shift */
+
+/* vm_page counters */
+int npages; /* number of pages we manage */
+int free; /* number of free pages */
+int paging; /* number of pages in the process of being paged out */
+int wired; /* number of wired pages */
+int reserve_pagedaemon; /* number of pages reserved for pagedaemon */
+int reserve_kernel; /* number of pages reserved for kernel */
+
+/* pageout params */
+int freemin; /* min number of free pages */
+int freetarg; /* target number of free pages */
+int inactarg; /* target number of inactive pages */
+int wiredmax; /* max number of wired pages */
+
+/* swap */
+int nswapdev; /* number of configured swap devices in system */
+int swpages; /* number of PAGE_SIZE'ed swap pages */
+int swpginuse; /* number of swap pages in use */
+int nswget; /* number of times fault calls uvm_swap_get() */
+int nanon; /* total number of anons in system */
+int nfreeanon; /* number of free anons */
+
+/* stat counters */
+int faults; /* page fault count */
+int traps; /* trap count */
+int intrs; /* interrupt count */
+int swtch; /* context switch count */
+int softs; /* software interrupt count */
+int syscalls; /* system calls */
+int pageins; /* pagein operation count */
+ /* pageouts are in pdpageouts below */
+int pgswapin; /* pages swapped in */
+int pgswapout; /* pages swapped out */
+int forks; /* forks */
+int forks_ppwait; /* forks where parent waits */
+int forks_sharevm; /* forks where vmspace is shared */
+
+/* fault subcounters */
+int fltnoram; /* number of times fault was out of ram */
+int fltnoanon; /* number of times fault was out of anons */
+int fltpgwait; /* number of times fault had to wait on a page */
+int fltpgrele; /* number of times fault found a released page */
+int fltrelck; /* number of times fault relock called */
+int fltrelckok; /* number of times fault relock is a success */
+int fltanget; /* number of times fault gets anon page */
+int fltanretry; /* number of times fault retries an anon get */
+int fltamcopy; /* number of times fault clears &quot;needs copy&quot; */
+int fltnamap; /* number of times fault maps a neighbor anon page */
+int fltnomap; /* number of times fault maps a neighbor obj page */
+int fltlget; /* number of times fault does a locked pgo_get */
+int fltget; /* number of times fault does an unlocked get */
+int flt_anon; /* number of times fault anon (case 1a) */
+int flt_acow; /* number of times fault anon cow (case 1b) */
+int flt_obj; /* number of times fault is on object page (2a) */
+int flt_prcopy; /* number of times fault promotes with copy (2b) */
+int flt_przero; /* number of times fault promotes with zerofill (2b) */
+
+/* daemon counters */
+int pdwoke; /* number of times daemon woke up */
+int pdrevs; /* number of times daemon rev'd clock hand */
+int pdfreed; /* number of pages daemon freed since boot */
+int pdscans; /* number of pages daemon scanned since boot */
+int pdanscan; /* number of anonymous pages scanned by daemon */
+int pdobscan; /* number of object pages scanned by daemon */
+int pdreact; /* number of pages daemon reactivated since boot */
+int pdbusy; /* number of times daemon found a busy page */
+int pdpageouts; /* number of times daemon started a pageout */
+int pddeact; /* number of pages daemon deactivates */</pre>
+</div>
+</section>
+<section class="Sh">
+<h1 class="Sh" id="NOTES"><a class="permalink" href="#NOTES">NOTES</a></h1>
+<p class="Pp"><code class="Fn">uvm_chgkprot</code>() is only available if the
+ kernel has been compiled with options <code class="Dv">KGDB</code>.</p>
+<p class="Pp">All structures and types whose names begin with &#x201C;vm_&#x201D;
+ will be renamed to &#x201C;uvm_&#x201D;.</p>
+</section>
+<section class="Sh">
+<h1 class="Sh" id="SEE_ALSO"><a class="permalink" href="#SEE_ALSO">SEE
+ ALSO</a></h1>
+<p class="Pp"><a class="Xr">swapctl(2)</a>, <a class="Xr">getloadavg(3)</a>,
+ <a class="Xr">kvm(3)</a>, <a class="Xr">sysctl(3)</a>,
+ <a class="Xr">ddb(4)</a>, <a class="Xr">options(4)</a>,
+ <a class="Xr">memoryallocators(9)</a>, <a class="Xr">pmap(9)</a>,
+ <a class="Xr">ubc(9)</a>, <a class="Xr">uvm_km(9)</a>,
+ <a class="Xr">uvm_map(9)</a></p>
+<p class="Pp"><cite class="Rs"><span class="RsA">Charles D. Cranor</span> and
+ <span class="RsA">Gurudatta M. Parulkar</span>, <span class="RsT">The UVM
+ Virtual Memory System</span>, <i class="RsB">Proceedings of the USENIX
+ Annual Technical Conference</i>, <i class="RsI">USENIX Association</i>,
+ <a class="RsU" href="http://www.usenix.org/event/usenix99/full_papers/cranor/cranor.pdf">http://www.usenix.org/event/usenix99/full_papers/cranor/cranor.pdf</a>,
+ <span class="RsP">117-130</span>, <span class="RsD">June 6-11,
+ 1999</span>.</cite></p>
+</section>
+<section class="Sh">
+<h1 class="Sh" id="HISTORY"><a class="permalink" href="#HISTORY">HISTORY</a></h1>
+<p class="Pp">UVM is a new VM system developed at Washington University in St.
+ Louis (Missouri). UVM's roots lie partly in the Mach-based
+ <span class="Ux">4.4BSD</span> VM system, the
+ <span class="Ux">FreeBSD</span> VM system, and the SunOS 4 VM system. UVM's
+ basic structure is based on the <span class="Ux">4.4BSD</span> VM system.
+ UVM's new anonymous memory system is based on the anonymous memory system
+ found in the SunOS 4 VM (as described in papers published by Sun
+ Microsystems, Inc.). UVM also includes a number of features new to
+ <span class="Ux">BSD</span> including page loanout, map entry passing,
+ simplified copy-on-write, and clustered anonymous memory pageout. UVM is
+ also further documented in an August 1998 dissertation by Charles D.
+ Cranor.</p>
+<p class="Pp">UVM appeared in <span class="Ux">NetBSD 1.4</span>.</p>
+</section>
+<section class="Sh">
+<h1 class="Sh" id="AUTHORS"><a class="permalink" href="#AUTHORS">AUTHORS</a></h1>
+<p class="Pp"><span class="An">Charles D. Cranor</span>
+ &lt;<a class="Mt" href="mailto:chuck@ccrc.wustl.edu">chuck@ccrc.wustl.edu</a>&gt;
+ designed and implemented UVM.</p>
+<p class="Pp"><span class="An">Matthew Green</span>
+ &lt;<a class="Mt" href="mailto:mrg@eterna23.net">mrg@eterna23.net</a>&gt;
+ wrote the swap-space management code and handled the logistical issues
+ involved with merging UVM into the <span class="Ux">NetBSD</span> source
+ tree.</p>
+<p class="Pp"><span class="An">Chuck Silvers</span>
+ &lt;<a class="Mt" href="mailto:chuq@chuq.com">chuq@chuq.com</a>&gt;
+ implemented the aobj pager, thus allowing UVM to support System V shared
+ memory and process swapping.</p>
+</section>
+</div>
+<table class="foot">
+ <tr>
+ <td class="foot-date">March 23, 2015</td>
+ <td class="foot-os">NetBSD 10.1</td>
+ </tr>
+</table>