Lines Matching defs:page

47 #include <asm/page.h>
91 * where the hardware walks 2 page tables:
517 * Flush TLB when accessed/dirty states are changed in the page tables,
518 * to guarantee consistency between TLB and page tables.
556 * KVM does not hold the refcount of the page used by
557 * kvm mmu; before reclaiming the page, we should
617 * Capture the dirty status of the page, so that it doesn't get
633 * Prevent page table teardown by making any free-er wait during
650 * OUTSIDE_GUEST_MODE and proceed to free the shadow page table.
713 pr_err_ratelimited("gfn mismatch under direct page %llx "
720 * Return the pointer to the large page information for a given gfn,
721 * handling slots that are not large page aligned.
1095 * spte write-protection is caused by protecting shadow page table.
1102 * shadow page.
1215 * Used when we do not need to care about huge page mappings: e.g. during dirty
1239 * protect the page if the D-bit isn't supported.
1274 * Used when we do not need to care about huge page mappings: e.g. during dirty
1476 * {gfn(page) | page intersects with [hva_start, hva_end)} =
1720 } page[KVM_PAGE_ARRAY_NR];
1731 if (pvec->page[i].sp == sp)
1734 pvec->page[pvec->nr].sp = sp;
1735 pvec->page[pvec->nr].idx = idx;
1910 i < pvec.nr && ({ sp = pvec.page[i].sp; 1;}); \
1920 struct kvm_mmu_page *sp = pvec->page[n].sp;
1921 unsigned idx = pvec->page[n].idx;
1943 WARN_ON(pvec->page[0].idx != INVALID_INDEX);
1945 sp = pvec->page[0].sp;
2066 /* The page is good, but __kvm_sync_page might still end
2096 * otherwise the content of the synced shadow page may
2097 * be inconsistent with guest page table.
2323 * the active page list. See list_del() in the "else" case of
2333 * Remove the active root from the active page list, the root
2373 * the page tables and see changes to vcpu->mode here. The barrier
2378 * guest mode and/or lockless shadow page table walks.
2403 * Don't zap active root pages, the page itself can't be freed
2523 * before the page had been marked as unsync-ed, something like the
2531 * (GPTE being in the guest page table shadowed
2533 * This reads SPTE during the page table walk.
2603 * If we overwrite a PTE page pointer with a 2MB PMD, unlink
2678 struct page *pages[PTE_PREFETCH_NUM];
2760 * page fault steps have already verified the guest isn't writing a
2873 * We cannot overwrite existing page tables with an NX
2874 * large page, as the leaf could be executable.
2917 * caused mmio page fault and treat it as mmio access.
2951 * need to be updated by slow page fault path.
2956 /* See if the page fault is due to an NX violation */
2963 * 1. The shadow page table entry is not present, which could mean that
2965 * 2. The shadow page table entry is present and the fault
2970 * page must be a genuine page fault where we have to create a new SPTE.
2972 * accesses to a present page.
3090 * we only dirty the first page into the dirty-bitmap in
3094 * Instead, we let the slow page fault path create a
3109 * Currently, fast page fault only works for direct mapping
3110 * since the gfn is not stable for indirect shadow page. See
3288 * Do we shadow a long mode page table? If so we need to
3289 * write-protect the guest's page table root.
3303 * We shadow a 32 bit page table. This may be a legacy 2-level
3304 * or a PAE 3-level page table. In either case we need to be aware that
3305 * the shadow page table may be a PAE or a long mode page table.
3312 * Allocate the page for the PDPTEs when shadowing 32-bit NPT
3402 * simultaneously, any guest page table changes are not
3406 * changes to the page tables are made. The comments in
3476 * page tables, because cr2 is a nGPA while the cache stores GPAs.
3596 * If the page table is zapped by other cpus, let CPU fault again on
3613 * guest is writing a page that is write-tracked, which can
3614 * not be fixed by the page fault handler.
3669 * Retry the page fault if the gfn hit a memslot that is being deleted
3686 return false; /* *pfn has correct page already */
3900 * It's possible that the cached previous root page is obsolete because
3916 * the shadow page tables.
3921 * If this is a direct root page, it doesn't have a write flooding
4018 /* no rsvd bits for 2 level 4K page table entries */
4030 /* 36bits PSE 4MB page */
4033 /* 32 bits PSE 4MB page */
4046 rsvd_bits(13, 20); /* large page */
4075 rsvd_bits(13, 20); /* large page */
4109 /* large page */
4138 * the page table on host is the shadow page table for the page
4187 * the direct page table on host, use as many mmu features as
4221 * is the shadow page table for intel nested guest.
4295 * - A user page is accessed
4316 * key violations are reported through a bit in the page fault error code.
4320 * CR0, EFER, CPL), and on other bits of the error code and the page tables.
4323 * page tables and the machine state:
4326 * - PK is always zero if U=0 in the page tables
4330 * code (minus the P bit) and the page table's U bit form an index into the
4370 * instruction fetch and is to a user page.
4764 * L2 page tables are never shadowed, so there is no need to sync
4771 * L1's nested page tables (e.g. EPT12). The nested translation
4773 * L2's page tables as the first level of translation and L1's
4774 * nested page tables as the second level of translation. Basically
4891 * Assume that the pte write on a page table of the same type
4911 * If we're seeing too many writes to a page, it may no longer be a page table,
4912 * or we may be forking, in which case it is better to unmap the page.
4918 * it can become unsync, then the guest page is not write-protected.
4929 * indicate a page is not used as a page table.
4998 * If we don't have indirect shadow pages, it means no page is
5096 * was due to a RO violation while translating the guest page.
5098 * paging in both guests. If true, we simply unprotect the page
5109 * optimistically try to just unprotect the page and let the processor
5110 * re-execute the instruction that caused the page fault. Do not allow
5115 * explicitly shadowing L1's page tables, i.e. unprotecting something
5217 * the kernel is not. But, KVM never creates a page size greater than
5310 struct page *page;
5333 page = alloc_page(GFP_KERNEL_ACCOUNT | __GFP_DMA32);
5334 if (!page)
5337 mmu->pae_root = page_address(page);
5386 * No obsolete valid page exists before a newly created page
5394 * pages. Skip the bogus page, otherwise we'll get stuck in an
5395 * infinite loop if the page gets put back on the list (again).
5421 * Trigger a remote TLB flush before freeing the page tables to ensure
5422 * KVM is not in the middle of a lockless shadow page table walk, which
5454 * Notify all vcpus to reload their shadow page tables and flush the TLB.
5455 * Then all vcpus will switch to the new shadow page table with the new
5459 * otherwise, a vcpu would purge the shadow page but miss the tlb flush.
5586 * We cannot do huge page mapping for indirect shadow pages,
5588 * tdp; such shadow pages are synced with the page table in
5589 * the guest, and the guest page table is using 4K page size
5825 * Set a reserved PA bit in MMIO SPTEs to generate page faults with
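The hits around source lines 1720-1945 all refer to the fixed-size vector used to collect shadow pages during an unsync walk (the pvec->page[].sp / .idx fields, the iterator at 1910, and the INVALID_INDEX check at 1943). For orientation, below is a minimal sketch of that structure and its add helper, reconstructed from the matched lines only; the helper name, the value of KVM_PAGE_ARRAY_NR, the duplicate check, and any field not visible in the hits are assumptions and may differ from the actual kernel source.

	/* Sketch: fixed-size vector of (shadow page, parent index) pairs. */
	#define KVM_PAGE_ARRAY_NR 16		/* assumed capacity */

	struct kvm_mmu_page;			/* opaque here; defined by KVM */

	struct kvm_mmu_pages {
		struct {
			struct kvm_mmu_page *sp;	/* collected shadow page */
			unsigned int idx;		/* index of sp within its parent */
		} page[KVM_PAGE_ARRAY_NR];
		unsigned int nr;			/* entries currently filled */
	};

	/*
	 * Append one (sp, idx) pair, skipping duplicates; returns nonzero once
	 * the vector is full so the caller knows to stop adding entries.
	 */
	static int mmu_pages_add(struct kvm_mmu_pages *pvec,
				 struct kvm_mmu_page *sp, int idx)
	{
		int i;

		for (i = 0; i < pvec->nr; i++)
			if (pvec->page[i].sp == sp)
				return 0;

		pvec->page[pvec->nr].sp = sp;
		pvec->page[pvec->nr].idx = idx;
		pvec->nr++;
		return pvec->nr == KVM_PAGE_ARRAY_NR;
	}

The fixed capacity bounds how many pages are gathered per pass; the iteration macro matched at line 1910 then walks pvec.page[0..nr), picking up each sp and idx in turn.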