Lines Matching refs:ASID
50 pr_warn("CPU%d: Unknown ASID size (%d); assuming 8-bit\n",
70 * We cannot decrease the ASID size at runtime, so panic if we support
71 * fewer ASID bits than the boot CPU.
73 pr_crit("CPU%d: smaller ASID size(%u) than boot CPU (%u)\n",
85 * is set, then the ASID will map only userspace. Thus
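This fragment (line 85) is from the KPTI path, where kernel and user ASIDs are allocated in pairs distinguished by the bottom bit, the set bit marking the userspace-only ASID; the allocator therefore has to keep one ASID of each pair off the free list. A self-contained sketch of that reservation, assuming the kernel's trick of filling the allocation bitmap with the 0b10101010 byte pattern:

    #include <string.h>

    #define ASID_COUNT_MODEL	256	/* 8-bit ASID space for the model */

    static unsigned long kpti_asid_map[ASID_COUNT_MODEL / (8 * sizeof(unsigned long))];

    /* Pre-mark one bit of every pair as taken before any allocation
     * happens: 0xaa is 0b10101010, so each ASID the allocator hands out
     * keeps a free partner differing only in the bottom bit. */
    static void reserve_kpti_pairs(void)
    {
        memset(kpti_asid_map, 0xaa, sizeof(kpti_asid_map));
    }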
109 /* Update the list of reserved ASIDs and the ASID bitmap. */
118 * ASID, as this is the only trace we have of
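Lines 109 and 118 belong to the rollover path: every CPU's currently active ASID is captured into a per-CPU reserved slot and re-marked in the map, because a CPU that has not scheduled since the previous rollover still has that ASID live in its TLB. A runnable model (plain arrays and C11 atomics stand in for the kernel's per-CPU variables; an 8-bit ASID space is assumed):

    #include <stdint.h>
    #include <stdatomic.h>

    #define NR_CPUS_MODEL	8
    #define ASID_COUNT_MODEL	256

    static _Atomic uint64_t active_asids[NR_CPUS_MODEL];
    static uint64_t reserved_asids[NR_CPUS_MODEL];
    static unsigned char asid_used[ASID_COUNT_MODEL];	/* byte per ASID, stands in for the bitmap */

    static void flush_context_model(void)
    {
        for (int cpu = 0; cpu < NR_CPUS_MODEL; cpu++) {
            uint64_t asid = atomic_exchange(&active_asids[cpu], 0);

            /*
             * A CPU that already rolled over and has not scheduled
             * since has active == 0; its reserved slot is then the only
             * trace of the ASID it is still running with, so keep it.
             */
            if (asid == 0)
                asid = reserved_asids[cpu];
            asid_used[asid & 0xff] = 1;	/* re-mark it as taken */
            reserved_asids[cpu] = asid;
        }
    }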
142 * (i.e. the same ASID in the current generation) but we can't
144 * of the old ASID are updated to reflect the mm. Failure to do
145 * so could result in us missing the reserved ASID in a future
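Lines 142-145 describe the reserved-ASID hit check used during allocation: when an mm's old ASID turns out to be parked in some CPU's reserved slot, the same number can be re-granted under the new generation, but every reserved copy must be rewritten to the new value. A model of it:

    #include <stdint.h>
    #include <stdbool.h>

    #define NR_CPUS_MODEL	8
    static uint64_t reserved_asids[NR_CPUS_MODEL];

    /* Rewrite *all* reserved copies of the old ASID to the new-generation
     * value: breaking out of the loop early could leave a stale slot
     * behind, and a future rollover would then fail to match the
     * reserved ASID back to this mm. */
    static bool check_update_reserved_model(uint64_t asid, uint64_t newasid)
    {
        bool hit = false;

        for (int cpu = 0; cpu < NR_CPUS_MODEL; cpu++) {
            if (reserved_asids[cpu] == asid) {
                hit = true;
                reserved_asids[cpu] = newasid;
            }
        }
        return hit;
    }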
168 * If our current ASID was active during a rollover, we
183 * We had a valid ASID in a previous life, so try to re-use
191 * Allocate a free ASID. If we can't find one, take a note of the
193 * always count from ASID #2 (index 1), as we use ASID #0 when setting
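Lines 168-193 are from the slow-path allocator: first try to revalidate the ASID the mm used in a previous generation, then scan the bitmap for a free one, starting at index 1 because ASID #0 backs the reserved TTBR0 of init_mm (under KPTI pairing, index 1 corresponds to ASID #2). A compressed, runnable model with an 8-bit ASID space and the generation tag kept in the bits above the hardware ASID, as in the kernel; the stubs are placeholders for machinery sketched above:

    #include <stdint.h>
    #include <stdbool.h>
    #include <string.h>

    #define ASID_COUNT_MODEL	256	/* 8-bit ASID space for the model */

    static unsigned char asid_taken[ASID_COUNT_MODEL];
    static uint64_t generation = 1;

    /* Stubs standing in for the rollover machinery sketched earlier. */
    static bool reserved_hit(uint64_t asid) { (void)asid; return false; }
    static void flush_all_tlbs(void) { }

    /* old_asid is what the mm held in an earlier generation (0 if none).
     * Returns a generation-tagged ASID. */
    static uint64_t new_context_model(uint64_t old_asid)
    {
        old_asid &= ASID_COUNT_MODEL - 1;	/* keep only the hardware ASID bits */

        if (old_asid) {
            uint64_t newasid = generation << 8 | old_asid;

            /* Re-use the old ASID if a rollover kept it reserved ... */
            if (reserved_hit(old_asid))
                return newasid;
            /* ... or if nobody claimed it in the current generation. */
            if (!asid_taken[old_asid]) {
                asid_taken[old_asid] = 1;
                return newasid;
            }
        }

        /* Scan from index 1: ASID #0 backs the reserved TTBR0 of init_mm. */
        for (uint64_t i = 1; i < ASID_COUNT_MODEL; i++) {
            if (!asid_taken[i]) {
                asid_taken[i] = 1;
                return generation << 8 | i;
            }
        }

        /* Out of ASIDs: bump the generation and flush. The real code also
         * re-marks every CPU's reserved ASID here (see flush_context). */
        generation++;
        memset(asid_taken, 0, sizeof(asid_taken));
        flush_all_tlbs();
        asid_taken[1] = 1;
        return generation << 8 | 1;
    }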
228 * If our active_asids is non-zero and the ASID matches the current
236 * - We get a valid ASID back from the cmpxchg, which means the
247 /* Check that our ASID belongs to the current generation. */
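Lines 228-247 cover the lockless fast path of a context switch: if this CPU's active ASID is non-zero and the mm's ASID carries the current generation, a cmpxchg republishes it and the locked slow path is skipped entirely; a zero entry means a rollover intervened. A C11-atomics model (the kernel's relaxed-ordering subtleties are omitted):

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdatomic.h>

    #define NR_CPUS_MODEL	8

    static _Atomic uint64_t active_asids[NR_CPUS_MODEL];
    static _Atomic uint64_t asid_generation;

    /* The ASID is usable as-is only if its generation tag (upper bits;
     * 8-bit hardware ASIDs assumed) matches the current generation. */
    static bool gen_match(uint64_t asid)
    {
        return (asid >> 8) == atomic_load(&asid_generation);
    }

    /* Returns true if the ASID was re-published as this CPU's active
     * ASID without taking the slow-path lock. */
    static bool switch_fast_path(int cpu, uint64_t asid)
    {
        uint64_t old = atomic_load(&active_asids[cpu]);

        /*
         * 0 means a rollover zapped this CPU's entry; a stale generation
         * means the ASID itself must be re-allocated. Either way, fall
         * back to the locked slow path.
         */
        if (!old || !gen_match(asid))
            return false;

        /*
         * Install the ASID only if no concurrent rollover cleared the
         * slot in the meantime; on success the rollover code is
         * guaranteed to observe (and preserve) this ASID.
         */
        return atomic_compare_exchange_strong(&active_asids[cpu], &old, asid);
    }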
295 * We went through one or more rollover since that ASID was
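Line 295 is from the ASID pinning interface (used when an mm is shared with a device, e.g. an SMMU): before a reference is taken, a stale ASID is refreshed through the normal allocator. A simplified model; the real code also enforces the max_pinned_asids limit and uses a proper refcount, and new_context_model here is only a stub:

    #include <stdint.h>

    static uint64_t generation = 3;	/* stand-in for the global generation */

    /* Stub: hand out a fresh generation-tagged ASID (see the
     * new_context sketch above for the real scan). */
    static uint64_t new_context_model(uint64_t old_asid)
    {
        return generation << 8 | (old_asid ? old_asid : 1);
    }

    struct mm_model {
        uint64_t id;		/* generation-tagged ASID */
        unsigned int pinned;	/* refcount stand-in */
    };

    /* Pinning: make sure the mm owns a current-generation ASID, then
     * take a reference so rollovers stop recycling it. */
    static uint64_t pin_asid(struct mm_model *mm)
    {
        if ((mm->id >> 8) != generation) {
            /* One or more rollovers happened since this ASID was
             * used: revalidate it or generate a new one. */
            mm->id = new_context_model(mm->id & 0xff);
        }
        mm->pinned++;
        return mm->id & 0xff;	/* hardware ASID handed to the device */
    }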
352 unsigned long asid = ASID(mm);
355 /* Skip CNP for the reserved ASID */
359 /* SW PAN needs a copy of the ASID in TTBR0 for entry */
363 /* Set ASID in TTBR1 since TCR.A1 is set */
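Lines 352-363 are where the new ASID is actually programmed into the translation table base registers: CNP (common-not-private, TTBR bit 0) is skipped for the reserved ASID, SW PAN keeps a copy of the ASID in TTBR0 for the entry code, and because TCR_EL1.A1 selects the TTBR1 ASID field, the live ASID goes into TTBR1[63:48]. A standalone model of the register assembly (field positions per the ARMv8 architecture; the function and parameter names are invented for illustration):

    #include <stdint.h>

    #define TTBR_ASID_SHIFT	48
    #define TTBR_ASID_MASK	(0xffffULL << TTBR_ASID_SHIFT)
    #define TTBR_CNP_BIT	1ULL

    struct ttbrs { uint64_t ttbr0, ttbr1; };

    /* pgd_phys is the user page-table base, old_ttbr1 the current
     * TTBR1_EL1 value, asid the allocated hardware ASID. */
    static struct ttbrs switch_mm_model(uint64_t pgd_phys, uint64_t old_ttbr1,
                                        uint16_t asid, int cnp, int sw_pan)
    {
        struct ttbrs t;

        t.ttbr0 = pgd_phys;

        /* Skip CNP for the reserved ASID (#0, used for init_mm). */
        if (cnp && asid)
            t.ttbr0 |= TTBR_CNP_BIT;

        /* SW PAN: entry code re-reads the ASID from TTBR0, so mirror it. */
        if (sw_pan)
            t.ttbr0 |= (uint64_t)asid << TTBR_ASID_SHIFT;

        /* TCR_EL1.A1 = 1: the hardware takes the live ASID from TTBR1. */
        t.ttbr1 = (old_ttbr1 & ~TTBR_ASID_MASK) |
                  ((uint64_t)asid << TTBR_ASID_SHIFT);
        return t;
    }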
385 * one more ASID than CPUs. ASID #0 is reserved for init_mm.
388 pr_info("ASID allocator initialised with %lu entries\n",
392 * There must always be an ASID available after rollover. Ensure that,
393 * even if all CPUs have a reserved ASID and the maximum number of ASIDs
394 * are pinned, there is still at least one empty slot in the ASID map.
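Lines 385-394 come from the sizing pass that runs once CPU capabilities are known. Rollover can only succeed if a free ASID always remains: every CPU may be holding a reserved ASID, ASID #0 belongs to init_mm, and pinned ASIDs never come back on rollover. A sketch of that arithmetic; the halving under KPTI reflects the kernel/user pairing, and the reading of the final "- 2" is my interpretation:

    #include <assert.h>
    #include <stdio.h>

    static unsigned long max_pinned_asids;

    /* Run once CPU caps are known; num_asids is 1 << asid_bits. */
    static void update_limits_model(unsigned long num_asids,
                                    unsigned long num_cpus, int kpti)
    {
        if (kpti)
            num_asids /= 2;	/* kernel/user pairs: only half are usable */

        /* Rollover can fail unless there are more ASIDs than CPUs:
         * every CPU may hold a reserved ASID, and #0 is init_mm's. */
        assert(num_asids - 1 > num_cpus);
        printf("ASID allocator initialised with %lu entries\n", num_asids);

        /*
         * Keep one slot guaranteed empty after rollover even when every
         * CPU holds a reserved ASID and the pin limit is exhausted; the
         * "- 2" plausibly accounts for ASID #0 plus that free slot.
         */
        max_pinned_asids = num_asids - num_cpus - 2;
    }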
416 * and reserve kernel ASIDs from the beginning.
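Line 416 explains the initialisation-order constraint: CPU capabilities (and therefore whether KPTI is really enabled) are not finalized when the allocator is set up, so it conservatively assumes KPTI and reserves the kernel half of each ASID pair from the start. A minimal sketch; my understanding is that the first rollover rebuilds the map without this reservation if KPTI turns out to be disabled:

    #include <string.h>

    #define ASID_COUNT_MODEL	256

    static unsigned long init_asid_map[ASID_COUNT_MODEL / (8 * sizeof(unsigned long))];

    /* Early init: too soon to know whether KPTI is on, so assume it is
     * and pre-reserve one ASID of each kernel/user pair up front (the
     * 0xaa pattern sketched earlier). */
    static void asids_init_model(void)
    {
        memset(init_asid_map, 0xaa, sizeof(init_asid_map));
    }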