Lines Matching refs:range
84 * The unused vmemmap range, which was not yet memset(PAGE_UNUSED), ranges
112 * We only optimize if the new used range directly follows the
113 * previously unused range (esp., when populating consecutive sections).
140 * unused range in the populated PMD.
471 * Add a physical memory range to the 1:1 mapping.
480 * Remove a physical memory range from the 1:1 mapping.
520 struct range arch_get_mappable_range(void)
522 struct range mhp_range;
531 struct range range = arch_get_mappable_range();
534 if (start < range.start ||
535 start + size > range.end + 1 ||
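The last three hits show the common pattern around `arch_get_mappable_range()`: the architecture reports an inclusive `struct range` window, and callers validate a `[start, start + size)` span against it before touching the 1:1 mapping. A minimal user-space sketch of that check, with a hypothetical `check_mappable()` helper and an arbitrary stand-in window (neither is the actual kernel code):

```c
#include <assert.h>
#include <errno.h>

/* Simplified sketch of the kernel's struct range: end is inclusive. */
struct range {
	unsigned long start;
	unsigned long end;
};

/* Stand-in for arch_get_mappable_range(): the real function reports
 * the architecture's mappable window; a 1 MiB window is assumed here
 * purely for illustration. */
static struct range arch_get_mappable_range(void)
{
	struct range mhp_range;

	mhp_range.start = 0;
	mhp_range.end = (1UL << 20) - 1;
	return mhp_range;
}

/* Hypothetical helper mirroring the bounds check visible in the hits:
 * reject a span that starts below the window, extends past range.end + 1
 * (end is inclusive, hence the + 1), or wraps around on overflow. */
static int check_mappable(unsigned long start, unsigned long size)
{
	struct range range = arch_get_mappable_range();

	if (start < range.start ||
	    start + size > range.end + 1 ||
	    start + size < start)	/* start + size overflowed */
		return -ERANGE;
	return 0;
}
```

The `start + size < start` clause is the overflow guard: with an unsigned wrap-around, a huge `size` could otherwise pass the upper-bound comparison.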