Lines matching defs: dirty

1006  * Allocation size is twice as large as the actual dirty bitmap size.
1446 /* Allocate/free page dirty bitmap as needed */
1496 * kvm_get_dirty_log - get a snapshot of dirty pages
1499 * @is_dirty: set to '1' if any dirty pages were found
1541 * kvm_get_dirty_log_protect - get a snapshot of dirty pages
1542 * and reenable dirty page tracking for the corresponding pages.
1547 * concurrently. So, to avoid losing track of dirty pages we keep the
1556 * entry. This is not a problem because the page is reported dirty using
1630 * kvm_vm_ioctl_get_dirty_log - get and clear the log of dirty pages in a slot
1634 * Steps 1-4 below provide general overview of dirty page logging. See
1638 * always flush the TLB (step 4) even if previous step failed and the dirty
1640 * does not preclude user space subsequent dirty log read. Flushing TLB ensures
1641 * writes will be marked dirty for next log read.
1662 * kvm_clear_dirty_log_protect - clear dirty bits in the bitmap
1663 * and reenable dirty page tracking for the corresponding pages.
1665 * @log: slot id and address from which to fetch the bitmap of dirty pages
2211 void kvm_release_pfn(kvm_pfn_t pfn, bool dirty, struct gfn_to_pfn_cache *cache)
2219 if (dirty)
2228 kvm_release_pfn(cache->pfn, cache->dirty, cache);
2232 cache->dirty = false;
2309 bool dirty, bool atomic)
2330 if (dirty)
2334 cache->dirty |= dirty;
2336 kvm_release_pfn(map->pfn, dirty, NULL);
2343 struct gfn_to_pfn_cache *cache, bool dirty, bool atomic)
2346 cache, dirty, atomic);
2351 void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map, bool dirty)
2354 dirty, false);
2405 * touched (e.g. set dirty) except by its owner".