Searched refs:set_spte (Results 1 - 2 of 2) sorted by relevance
/kernel/linux/linux-5.10/arch/x86/kvm/mmu/

paging_tmpl.h
     210    * on supported processors. Therefore, set_spte does not automatically
    1081    set_spte_ret |= set_spte(vcpu, &sp->spt[i],                          in sync_page()

mmu.c
    2560    static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,               in set_spte(), function definition
    2622    set_spte_ret = set_spte(vcpu, sptep, pte_access, level, gfn, pfn,    in mmu_set_spte()
    2995    * set_spte. But fast_page_fault is very unlikely to happen with PML  in fast_pf_fix_direct_spte()
    3002    * Compare with set_spte where instead shadow_dirty_mask is set.      in fast_pf_fix_direct_spte()
Completed in 10 milliseconds
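
Aside: the two call sites above consume set_spte()'s return value differently. mmu_set_spte() (mmu.c:2622) assigns it, while sync_page() (paging_tmpl.h:1081) OR-accumulates it across the entries of a shadow page so that any required follow-up, such as a remote TLB flush, can be done once after the loop rather than per entry. Below is a minimal, self-contained sketch of that accumulate-then-act-once pattern; the names (demo_set_spte, FAKE_NEED_TLB_FLUSH) are hypothetical stand-ins, not the kernel's, and only the shape of the pattern is taken from the hits above.

/*
 * Hedged illustration, not kernel code: model set_spte() as returning
 * status bits that the caller accumulates over a loop of SPTEs and
 * acts on once afterwards.
 */
#include <stdio.h>

#define FAKE_NEED_TLB_FLUSH  (1 << 0)   /* hypothetical status bit */

/* Stand-in for set_spte(): update one entry, report needed follow-up. */
static int demo_set_spte(unsigned long *sptep, unsigned long new_spte)
{
	int ret = 0;

	/* A live mapping changed, so remote TLBs would be stale. */
	if (*sptep != 0 && *sptep != new_spte)
		ret |= FAKE_NEED_TLB_FLUSH;

	*sptep = new_spte;
	return ret;
}

int main(void)
{
	unsigned long spt[4] = { 0, 0x1000, 0x2000, 0 };
	int set_spte_ret = 0;
	int i;

	/* Accumulate per-entry results, as sync_page() does with |=. */
	for (i = 0; i < 4; i++)
		set_spte_ret |= demo_set_spte(&spt[i], 0x3000 + i * 0x1000);

	/* Act once after the loop instead of flushing per entry. */
	if (set_spte_ret & FAKE_NEED_TLB_FLUSH)
		printf("would flush remote TLBs once for the whole page\n");

	return 0;
}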