Lines Matching defs:page

53  * long will also starve other vCPUs. We also have to make sure that the page
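
The hit at line 53 comes from the comment above the stage 2 range walker, which periodically releases the MMU lock while covering a large range so that other vCPUs are not starved. A minimal sketch of that batching pattern is below; apply_range_batched() and BATCH_SIZE are illustrative names (this is not the kernel's stage2_apply_range()), and it assumes the rwlock flavour of kvm->mmu_lock used on arm64. The second half of the comment, keeping the tables alive while the lock is dropped, is what the RCU-deferred free further down this listing addresses.

        #include <linux/kvm_host.h>
        #include <linux/kernel.h>
        #include <linux/sched.h>
        #include <linux/sizes.h>

        #define BATCH_SIZE      SZ_2M   /* assumed batch granularity */

        static int apply_range_batched(struct kvm *kvm, phys_addr_t addr, phys_addr_t end,
                                       int (*fn)(struct kvm *kvm, phys_addr_t start, phys_addr_t end))
        {
                int ret = 0;

                while (addr < end) {
                        /* Stop each batch at the next BATCH_SIZE boundary (or at the end). */
                        phys_addr_t next = ALIGN_DOWN(addr + BATCH_SIZE, BATCH_SIZE);

                        if (next > end)
                                next = end;

                        ret = fn(kvm, addr, next);
                        if (ret)
                                break;

                        /* Drop the MMU lock between batches so other vCPUs can make progress. */
                        if (next != end)
                                cond_resched_rwlock_write(&kvm->mmu_lock);

                        addr = next;
                }

                return ret;
        }
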
86 * Get the maximum number of page-table pages needed to split a range
136 /* Eager page splitting is best-effort. */
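
Lines 86 and 136 belong to the eager page splitting path. As a rough illustration of what "maximum number of page-table pages needed to split a range" means, the sketch below computes an upper bound for a 4K granule: one table to split each 1GiB block into 2MiB blocks, plus one table to split each 2MiB block into pages. The helper name and the fixed granule are assumptions, not the kernel's actual calculation.

        #include <linux/kernel.h>
        #include <linux/sizes.h>

        /* Upper bound on table pages needed to split [0, size) down to PTEs (4K granule). */
        static unsigned long nr_split_table_pages(unsigned long size)
        {
                unsigned long nr_1g_blocks = DIV_ROUND_UP(size, SZ_1G);   /* 1GiB -> 2MiB */
                unsigned long nr_2m_blocks = DIV_ROUND_UP(size, SZ_2M);   /* 2MiB -> 4KiB */

                return nr_1g_blocks + nr_2m_blocks;
        }
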
224 struct page *page = container_of(head, struct page, rcu_head);
225 void *pgtable = page_to_virt(page);
226 u32 level = page_private(page);
233 struct page *page = virt_to_page(addr);
235 set_page_private(page, (unsigned long)level);
236 call_rcu(&page->rcu_head, stage2_free_unlinked_table_rcu_cb);
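
Lines 224-236 show how an unlinked stage 2 table page is freed: the table level is stashed in the page's private field and the actual free is deferred to an RCU callback, so lock-free walkers that may still be traversing the detached table can finish first. A stripped-down sketch of the same pattern, with free_table() standing in as a placeholder for the real pgtable teardown routine:

        #include <linux/mm.h>
        #include <linux/rcupdate.h>

        /* Placeholder for the real routine that tears down the detached table. */
        static void free_table(void *pgtable, u32 level) { }

        static void table_free_rcu_cb(struct rcu_head *head)
        {
                struct page *page = container_of(head, struct page, rcu_head);
                void *pgtable = page_to_virt(page);
                u32 level = page_private(page);   /* level stashed at queue time */

                free_table(pgtable, level);
        }

        static void table_free_deferred(void *addr, u32 level)
        {
                struct page *page = virt_to_page(addr);

                set_page_private(page, (unsigned long)level);
                call_rcu(&page->rcu_head, table_free_rcu_cb);
        }
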
251 struct page *p = virt_to_page(addr);
252 /* Dropping last refcount, the page will be freed */
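
Lines 251-252 are the matching put side: the caller peeks at the refcount before dropping it so per-VM page-table accounting can be adjusted exactly when the final reference goes away. A sketch of that pattern; account_pgtable() is a placeholder for the accounting hook.

        #include <linux/mm.h>

        /* Placeholder for the per-VM page-table accounting hook. */
        static void account_pgtable(void *virt, int delta) { }

        static void put_pgtable_page(void *virt)
        {
                struct page *p = virt_to_page(virt);

                /* Dropping the last refcount, the page will be freed. */
                if (page_count(p) == 1)
                        account_pgtable(virt, -1);

                put_page(p);
        }
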
299 * This is why right after unmapping a page/section and invalidating
308 * unmap_stage2_range -- Clear stage2 page table entries to unmap a range
349 * Go through the stage 2 page tables and invalidate any cache lines
370 * free_hyp_pgds - free Hyp-mode page tables
642 * page to allocate our VAs. If not, the check in
710 kvm_err("Cannot allocate hyp stack guard page\n");
715 * Since the stack grows downwards, map the stack to the page
716 * at the higher address and leave the lower guard page
720 * and addresses corresponding to the guard page have the
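
Lines 710-722 cover the hyp stack guard page: a two-page VA window is reserved, only the higher page is backed by the stack, and the lower page is left unmapped. Assuming the window is aligned to twice the page size, every valid stack address then has the PAGE_SHIFT bit set and every guard-page address has it clear, which is what overflow detection exploits (the hyp entry code performs the equivalent bit test in assembly). A sketch of that check:

        #include <linux/types.h>
        #include <asm/page.h>

        static bool hyp_stack_overflowed(unsigned long sp)
        {
                /* PAGE_SIZE == BIT(PAGE_SHIFT), so this tests the PAGE_SHIFT bit of SP. */
                return !(sp & PAGE_SIZE);
        }
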
818 * teardown of the userspace page tables (which relies on
920 /* Eager page splitting is disabled by default */
1120 * Afterwards, the dirty page log can be read.
1273 * Check if the given hva is backed by a transparent huge page (THP) and
1303 * page. However, because we map the compound huge page and
1304 * not the individual tail page, we need to transfer the
1305 * refcount to the head page. We have to be careful that the
1316 * to PG_head and switch the pfn from a tail page to the head
1317 * page accordingly.
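
Lines 1273-1317 describe the transparent huge page adjustment in the stage 2 fault path: once the hva is known to be backed by a PMD-sized THP, the reference held on the faulting tail page is moved to the head page, and both the pfn and the IPA are realigned to the 2MiB boundary so a block mapping can be installed. A condensed sketch of that adjustment (thp_adjust is an illustrative name; the surrounding checks and error handling are omitted):

        #include <linux/kvm_host.h>
        #include <linux/mm.h>
        #include <linux/pgtable.h>

        static unsigned long thp_adjust(kvm_pfn_t *pfnp, phys_addr_t *ipap)
        {
                kvm_pfn_t pfn = *pfnp;

                *ipap &= PMD_MASK;               /* IPA of the 2MiB block */
                put_page(pfn_to_page(pfn));      /* drop the tail-page reference */
                pfn &= ~(PTRS_PER_PMD - 1);      /* first pfn of the compound page */
                get_page(pfn_to_page(pfn));      /* hold the head page instead */
                *pfnp = pfn;

                return PMD_SIZE;                 /* map this fault as a 2MiB block */
        }
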
1328 /* Use page mapping if we cannot use block mapping. */
1362 * The page will be mapped in stage 2 as Normal Cacheable, so the VM will be
1363 * able to see the page's tags and therefore they must be initialised first. If
1368 * racing to sanitise the same page
1369 * - mmap_lock serialises a VM faulting a page in against the VMM performing
1376 struct page *page = pfn_to_page(pfn);
1381 for (i = 0; i < nr_pages; i++, page++) {
1382 if (try_page_mte_tagging(page)) {
1383 mte_clear_page_tags(page_address(page));
1384 set_page_mte_tagged(page);
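
Lines 1376-1384 come from the MTE tag sanitisation done before a page is mapped into stage 2 as Normal Cacheable: each page in the range has its allocation tags cleared exactly once, with try_page_mte_tagging() deciding the winner if several callers race on the same page. Reassembled as a self-contained loop (the kvm_has_mte() gate and the caller's pfn/size handling are omitted):

        #include <linux/mm.h>
        #include <asm/mte.h>

        static void sanitise_tags(unsigned long pfn, unsigned long nr_pages)
        {
                struct page *page = pfn_to_page(pfn);
                unsigned long i;

                for (i = 0; i < nr_pages; i++, page++) {
                        /* Only the caller that wins the race initialises the tags. */
                        if (try_page_mte_tagging(page)) {
                                mte_clear_page_tags(page_address(page));
                                set_page_mte_tagged(page);
                        }
                }
        }
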
1440 * Let's check if we will get back a huge page backed by hugetlbfs, or
1518 * If the page was identified as device early by looking at
1530 * Only actually map the page as writable if this was a write
1545 * If we are not forced to use page mapping, check if we are
1598 /* Mark the page dirty only if the fault is handled successfully */
1610 /* Resolve the access fault by making the page young again. */
1633 * guest simply needs more memory and we must allocate an appropriate page or it
1706 * The guest has put either its instructions or its page-tables
1742 * of the page size.
1793 * If the page isn't tagged, defer to user_mem_abort() for sanitising
1801 * We've moved a page around, probably through CoW, so let's treat
1890 * init code does not cross a page boundary.
1897 * currently configured page size and VA_BITS_MIN, in which case we will
1912 * 1 VA bit to ensure that the hypervisor can both ID map its code page
1920 kvm_debug("IDMAP page: %lx\n", hyp_idmap_start);
1929 * The idmap page intersects with the VA space,
1939 kvm_err("Hyp mode page-table not allocated\n");
1998 * Free any leftovers from the eager page splitting cache. Do
2055 /* IO region dirty page logging not allowed */