Lines Matching defs:bitmap
485 * Check if the MSR is intercepted in the L01 MSR bitmap.
519 * have the write-low and read-high bitmap offsets the wrong way round.
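
The check named at 485 amounts to testing one bit in a 4-KiB MSR bitmap, laid out in the four quadrants that the comment at 519 qualifies. A minimal self-contained sketch of such a test, assuming the architected layout; the helper name and signature here are illustrative, not nested.c's:

#include <stdbool.h>
#include <stdint.h>

/*
 * Illustrative sketch: test an MSR's write-intercept bit in a 4-KiB
 * VMX MSR bitmap mapped at @bitmap.  Layout per the Intel SDM
 * ("MSR-Bitmap Address"; per the comment above, early manual
 * revisions listed the write-low and read-high offsets swapped):
 *
 *   0x000  read  bitmap, low  MSRs 0x00000000-0x00001fff
 *   0x400  read  bitmap, high MSRs 0xc0000000-0xc0001fff
 *   0x800  write bitmap, low  MSRs
 *   0xc00  write bitmap, high MSRs
 *
 * MSRs outside both architected ranges always exit, bitmap or not.
 */
static bool msr_write_intercepted(const uint8_t *bitmap, uint32_t msr)
{
	uint32_t base = (msr >= 0xc0000000) ? 0xc00 : 0x800;
	uint32_t bit = msr & 0x1fff;	/* bit index within the quadrant */

	return bitmap[base + bit / 8] & (1 << (bit % 8));
}
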
561 * Merge L0's and L1's MSR bitmaps; return false to indicate that
572 /* Nothing to do if the MSR bitmap is not in use. */
595 * directly from the L1 bitmap.
634 * Checking the L0->L1 bitmap verifies two things:
637 * ensures that we do not accidentally generate an overly permissive
638 * L02 MSR bitmap from the L12 MSR bitmap.
640 * unnecessary merging of the bitmap if the MSR is unused. This
641 * works properly because we only update the L01 MSR bitmap lazily.
642 * So even if L0 should pass these MSRs through to L1, the L01 bitmap is only
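
Taken together, the two conditions at 634-642 say the merged L02 bitmap may leave an intercept clear only when both parent bitmaps leave it clear. A compact sketch of that predicate for the write case, reusing the quadrant layout from the previous sketch (the function name is illustrative, not nested.c's):

#include <stdbool.h>
#include <stdint.h>

/*
 * Illustrative merge rule: pass a write through to L2 only when
 * neither the L01 nor the L12 bitmap intercepts it; anything else
 * would make the L02 bitmap more permissive than one of its parents.
 */
static bool may_passthrough_write(const uint8_t *l01, const uint8_t *l12,
				  uint32_t msr)
{
	uint32_t off = (msr >= 0xc0000000 ? 0xc00 : 0x800) + (msr & 0x1fff) / 8;
	uint8_t mask = 1 << (msr & 7);

	return !(l01[off] & mask) && !(l12[off] & mask);
}
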
2286 * we do not have access to L1's MSR bitmap yet. For now, keep
5655 gpa_t bitmap, last_bitmap;
5663 bitmap = vmcs12->io_bitmap_a;
5665 bitmap = vmcs12->io_bitmap_b;
5668 bitmap += (port & 0x7fff) / 8;
5670 if (last_bitmap != bitmap)
5671 if (kvm_vcpu_read_guest(vcpu, bitmap, &b, 1))
5678 last_bitmap = bitmap;
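
The fragments at 5655-5678 perform the nested I/O-bitmap lookup: ports 0x0000-0x7fff are covered by io_bitmap_a, ports 0x8000-0xffff by io_bitmap_b, one bit per port, and the fetched byte is cached in last_bitmap so adjacent ports need not re-read guest memory. A simplified self-contained sketch of the address computation those lines perform (the typedef and struct are stand-ins for the kernel's types, not its definitions):

#include <stdbool.h>
#include <stdint.h>

typedef uint64_t gpa_t;	/* stand-in for the kernel's gpa_t */

struct io_bitmaps {	/* stand-in for the vmcs12 fields used above */
	gpa_t io_bitmap_a;	/* ports 0x0000-0x7fff */
	gpa_t io_bitmap_b;	/* ports 0x8000-0xffff */
};

/* Guest-physical address of the byte holding @port's intercept bit. */
static gpa_t io_bitmap_byte(const struct io_bitmaps *io, uint16_t port)
{
	gpa_t bitmap = (port < 0x8000) ? io->io_bitmap_a : io->io_bitmap_b;

	return bitmap + (port & 0x7fff) / 8;	/* one bit per port */
}

/* Once that byte is read from guest memory, the bit itself is: */
static bool io_port_intercepted(uint8_t byte, uint16_t port)
{
	return byte & (1 << (port & 7));
}
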
5706 * MSR bitmap. This may be the case even when L0 doesn't use MSR bitmaps.
5713 gpa_t bitmap;
5723 bitmap = vmcs12->msr_bitmap;
5725 bitmap += 2048;
5728 bitmap += 1024;
5731 /* Then read the msr_index'th bit from this bitmap: */
5734 if (kvm_vcpu_read_guest(vcpu, bitmap + msr_index/8, &b, 1))
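
The fragments at 5723-5734 walk a running offset into the same four-quadrant layout sketched earlier, this time as a guest-physical address: +2048 selects the write half, +1024 the high-MSR quadrant, and MSRs outside both architected ranges always exit. A simplified sketch of that arithmetic (self-contained, not the kernel's exact code):

#include <stdint.h>

typedef uint64_t gpa_t;	/* stand-in for the kernel's gpa_t */

/*
 * Guest-physical address of the byte holding @msr_index's intercept
 * bit in the 4-KiB MSR bitmap at @msr_bitmap, or 0 if the MSR lies
 * outside both architected ranges (in which case it always exits).
 */
static gpa_t msr_bitmap_byte(gpa_t msr_bitmap, uint32_t msr_index, int is_write)
{
	gpa_t bitmap = msr_bitmap;

	if (is_write)
		bitmap += 2048;		/* write bitmaps are the upper 2 KiB */
	if (msr_index >= 0xc0000000) {
		msr_index -= 0xc0000000;
		bitmap += 1024;		/* high-MSR quadrant */
	}
	if (msr_index >= 1024 * 8)	/* beyond the 8192 bits per quadrant */
		return 0;

	return bitmap + msr_index / 8;	/* bit (msr_index & 7) of this byte */
}
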
5817 struct vmcs12 *vmcs12, gpa_t bitmap)
5834 if (kvm_vcpu_read_guest(vcpu, bitmap + field/8, &b, 1))
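
The read at 5834 tests one bit per VMCS-field encoding in a shadow-VMCS vmread/vmwrite bitmap: byte field/8, bit field%8, with encodings of 0x8000 and above exiting unconditionally. A simplified sketch of that test, operating on a bitmap already copied into host memory (in the kernel the single byte is instead fetched from guest memory via kvm_vcpu_read_guest(), as in the fragment above; the function name here is illustrative):

#include <stdbool.h>
#include <stdint.h>

/* Test @field's intercept bit in a vmread/vmwrite bitmap at @bitmap. */
static bool vmcs_field_intercepted(const uint8_t *bitmap, uint32_t field)
{
	if (field >> 15)	/* encodings >= 0x8000 always cause an exit */
		return true;

	return bitmap[field / 8] & (1 << (field & 7));
}
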
6032 * the XSS exit bitmap in vmcs12.
6517 * hardware. For example, L1 can specify an MSR bitmap, and we