Lines Matching defs:shadow
113 * Poisons the shadow memory for 'size' bytes starting from 'addr'.
121 * Perform shadow offset calculation based on untagged address, as
138 * Perform shadow offset calculation based on untagged address, as
147 u8 *shadow = (u8 *)kasan_mem_to_shadow(address + size);
150 *shadow = tag;
152 *shadow = size & KASAN_SHADOW_MASK;
639 * If shadow is mapped already then it must have been mapped
664 * In the latter case we can use vfree() to free shadow.
668 * Currently it's not possible to free shadow mapped
755 * STORE shadow(a), unpoison_val
757 * STORE shadow(a+99), unpoison_val      x = LOAD p
759 * STORE p, a                            LOAD shadow(x+99)
761 * If there is no barrier between the end of unpoisoning the shadow
764 * poison in the shadow.
770 * get_vm_area() and friends, the caller gets shadow allocated but
780 * Poison the shadow for a vmalloc region. Called as part of the
830 * That might not map onto the shadow in a way that is page-aligned:
840 * |??AAAAAA|AAAAAAAA|AA??????| < shadow
844 * shadow of the region aligns with shadow page boundaries. In the
845 * example, this gives us the shadow page (2). This is the shadow entirely
849 * partially covered shadow pages - (1) and (3) in the example. For this,
862 * |FFAAAAAA|AAAAAAAA|AAF?????| < shadow
866 * the free region down so that the shadow is page aligned. So we can free
889 * means that so long as we are careful with alignment and only free shadow