Lines Matching defs:state
3 * coupled.c - helper functions to enter the same idle state on multiple cpus
27 * will corrupt the GIC state unless the other cpu runs a
28 * workaround). Each cpu has a power state that it can enter without
32 * sometimes the whole SoC). Entering a coupled power state must
36 * WFI state until all cpus are ready to enter a coupled state, at
37 * which point the coupled state function will be called on all
44 * power state enter function at the same time. During this pass,
50 * requested_state stores the deepest coupled idle state each cpu
57 * state are no longer updating it.
62 * the waiting loop, in the ready loop, or in the coupled idle state.
64 * or in the coupled idle state.
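
Lines 50-64 describe the per-cpu lifecycle: a cpu publishes its requested state, idles in the safe state while waiting for the others, moves through a ready phase once everyone is waiting, and finally enters the coupled state. A conceptual sketch of those phases (the enum below is purely illustrative; the file appears to track waiting and ready cpus with counts rather than a per-cpu enum):

    /*
     * Illustrative only: names the phases a cpu moves through while
     * coordinating a coupled idle entry.  Not the file's actual bookkeeping.
     */
    enum example_coupled_phase {
            EXAMPLE_NOT_IDLE,       /* running, or in a non-coupled state */
            EXAMPLE_WAITING,        /* requested a coupled state, idling in the safe state */
            EXAMPLE_READY,          /* every coupled cpu is waiting; about to enter */
            EXAMPLE_COUPLED_IDLE,   /* inside the shared coupled idle state */
    };
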
73 * Set struct cpuidle_device.safe_state to a state that is not a
74 * coupled state. This is usually WFI.
77 * state that affects multiple cpus.
79 * Provide a struct cpuidle_state.enter function for each state
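
Lines 73-79 are the driver-side requirements: a non-coupled safe state (usually WFI), CPUIDLE_FLAG_COUPLED on every state that touches shared hardware, and an enter callback for each such state. A hedged sketch of a driver table written to those rules; the driver name, state names, latencies, and enter callbacks are hypothetical, and the sketch sets struct cpuidle_driver.safe_state_index, the field the verify check at line 185 refers to:

    #include <linux/cpuidle.h>

    /* Hypothetical enter callbacks; a real driver supplies its own. */
    static int example_wfi_enter(struct cpuidle_device *dev,
                                 struct cpuidle_driver *drv, int index)
    {
            /* arch-specific WFI would go here */
            return index;
    }

    static int example_coupled_enter(struct cpuidle_device *dev,
                                     struct cpuidle_driver *drv, int index)
    {
            /* shared power-down sequence, run on all coupled cpus together */
            return index;
    }

    static struct cpuidle_driver example_coupled_driver = {
            .name = "example_coupled",
            .states = {
                    {       /* state 0: the safe state, not coupled */
                            .name         = "WFI",
                            .exit_latency = 1,
                            .enter        = example_wfi_enter,
                    },
                    {       /* state 1: affects shared blocks, so mark it coupled */
                            .name         = "C2-coupled",
                            .exit_latency = 1000,
                            .flags        = CPUIDLE_FLAG_COUPLED,
                            .enter        = example_coupled_enter,
                    },
            },
            .state_count      = 2,
            .safe_state_index = 0,  /* must point at a non-coupled state */
    };
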
88 * struct cpuidle_coupled - data for a set of cpus that share a coupled idle state
144 * Must only be called from within a coupled idle state handler
145 * (state.enter when state.flags has CPUIDLE_FLAG_COUPLED set).
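
Lines 144-145 state the constraint on the barrier helper documented there: it may only be used from inside an enter callback of a state flagged CPUIDLE_FLAG_COUPLED. Assuming that helper is cpuidle_coupled_parallel_barrier() (the symbol itself is not shown in this listing), the hypothetical example_coupled_enter from the sketch above might use it like this:

    #include <linux/atomic.h>
    #include <linux/cpuidle.h>

    static atomic_t example_abort_barrier;  /* hypothetical, shared by the coupled cpus */

    static int example_coupled_enter(struct cpuidle_device *dev,
                                     struct cpuidle_driver *drv, int index)
    {
            /* per-cpu save/prepare work would go here */

            /*
             * Rendezvous: no cpu proceeds to the shared power-down sequence
             * until every coupled cpu has reached this point.  Legal only
             * because this state carries CPUIDLE_FLAG_COUPLED (lines 144-145).
             */
            cpuidle_coupled_parallel_barrier(dev, &example_abort_barrier);

            /* shared last-step power-down sequence would go here */
            return index;
    }
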
169 * cpuidle_state_is_coupled - check if a state is part of a coupled set
171 * @state: index of the target state in drv->states
173 * Returns true if the target state is coupled with cpus besides this one
175 bool cpuidle_state_is_coupled(struct cpuidle_driver *drv, int state)
177 return drv->states[state].flags & CPUIDLE_FLAG_COUPLED;
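
Lines 175-177 show the entire predicate: a state is coupled when CPUIDLE_FLAG_COUPLED is set in its flags. A hedged example of the kind of dispatch a caller might build on it (the wrapper function is illustrative, not taken from the listing):

    /* Illustrative dispatch: coupled states need the coordinated entry path. */
    static int example_enter_chosen_state(struct cpuidle_device *dev,
                                          struct cpuidle_driver *drv, int index)
    {
            if (cpuidle_state_is_coupled(drv, index))
                    return cpuidle_enter_state_coupled(dev, drv, index);

            return drv->states[index].enter(dev, drv, index);
    }
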
184 * Returns 0 for valid state values, a negative error code otherwise:
185 * * -EINVAL if the safe state (safe_state_index) is set to a coupled state.
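
Line 185 implies the verification rejects a driver whose safe state is itself a coupled state. A hedged sketch of such a check (not necessarily the file's exact implementation):

    #include <linux/cpuidle.h>
    #include <linux/errno.h>

    /* Sketch: refuse a driver whose safe_state_index points at a coupled state. */
    static int example_coupled_state_verify(struct cpuidle_driver *drv)
    {
            int i;

            for (i = 0; i < drv->state_count; i++)
                    if (cpuidle_state_is_coupled(drv, i) && i == drv->safe_state_index)
                            return -EINVAL;

            return 0;
    }
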
220 * state.
254 * Returns true if all cpus coupled to this target state are in the ready loop
266 * Returns true if all cpus coupled to this target state are in the wait loop
287 * cpuidle_coupled_get_state - determine the deepest idle state
291 * Returns the deepest idle state that all coupled cpus can enter
297 int state = INT_MAX;
307 if (cpu_online(i) && coupled->requested_state[i] < state)
308 state = coupled->requested_state[i];
310 return state;
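
Lines 297-310 are the heart of the selection: the deepest state every coupled cpu can use is the minimum of their requested states. A hedged reconstruction of the loop those fragments come from (the cpumask iteration and the coupled_cpus field are assumptions; requested_state and the online check appear in the fragments themselves):

    /* Reconstruction sketch, not the file's verbatim code. */
    static int example_coupled_get_state(struct cpuidle_coupled *coupled)
    {
            int i;
            int state = INT_MAX;

            for_each_cpu(i, &coupled->coupled_cpus)
                    if (cpu_online(i) && coupled->requested_state[i] < state)
                            state = coupled->requested_state[i];

            return state;
    }
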
324 * Ensures that the target cpu exits its waiting idle state (if it is in it)
326 * state.
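
Lines 324-326 describe the poke that knocks a target cpu out of its safe-state wait so it can re-evaluate, for example to move on to a deeper coupled state. A hedged sketch of the basic mechanism, a no-op cross-cpu call whose only effect is the wakeup interrupt (helper names are made up, and the exact IPI primitive the file uses is not shown in this listing; a real implementation inside the idle path would need an async-safe variant):

    #include <linux/smp.h>

    /* The IPI handler does nothing; the interrupt itself is the wakeup. */
    static void example_coupled_poked(void *info)
    {
    }

    /* Assumed mechanism: send a no-op IPI so the target cpu leaves its wait. */
    static void example_coupled_poke(int cpu)
    {
            smp_call_function_single(cpu, example_coupled_poked, NULL, 0);
    }
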
361 * @next_state: the index in drv->states of the requested state for this cpu
363 * Updates the requested idle state for the specified cpuidle device.
383 * Removes the requested idle state for the specified cpuidle device.
452 * cpuidle_enter_state_coupled - attempt to enter a state with coupled cpus
455 * @next_state: index of the requested state in drv->states
457 * Coordinate with coupled cpus to enter the target state. This is a two
461 * go to an intermediate state (the cpuidle_device's safe state), and wait for
499 * If this is the last cpu to enter the waiting state, poke
500 * all the other cpus out of their waiting state so they can
501 * enter a deeper state. This can race with one of the cpus
502 * exiting the waiting state due to an interrupt and
512 * Wait for all coupled cpus to be idle, using the deepest state
580 * controller when entering the deep idle state. It's not possible to
583 * coupled idle state of all cpus and retry.
592 /* all cpus have acked the coupled state */
607 * all other cpus will loop back into the safe idle state instead of
618 * a cpu exits and re-enters the ready state because this cpu has
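
Lines 452-618 are fragments of the two-stage entry sketched at lines 457-461: each cpu first parks in the safe state until every coupled cpu has requested idle, and only then do they all call the coupled state's enter function together. A heavily simplified, hedged sketch of that control flow (the set_waiting/cpus_waiting/set_not_waiting helpers are assumptions, example_coupled_get_state is the sketch above, and the real function additionally handles pokes, aborts, and the ready/waiting races the later fragments mention):

    /* Simplified flow only; error paths, pokes, and abort handling omitted. */
    static int example_enter_state_coupled(struct cpuidle_device *dev,
                                           struct cpuidle_driver *drv,
                                           int next_state)
    {
            int entered_state;
            struct cpuidle_coupled *coupled = dev->coupled;

            /* Stage 1: publish our request, then idle in the safe state. */
            example_coupled_set_waiting(dev->cpu, coupled, next_state);
            while (!example_coupled_cpus_waiting(coupled)) {
                    entered_state = drv->states[drv->safe_state_index]
                                        .enter(dev, drv, drv->safe_state_index);
                    if (need_resched())
                            goto out;       /* abort: this cpu has work to do */
            }

            /* Stage 2: everyone is waiting; enter the deepest common state. */
            next_state = example_coupled_get_state(coupled);
            entered_state = drv->states[next_state].enter(dev, drv, next_state);

    out:
            example_coupled_set_not_waiting(dev->cpu, coupled);
            return entered_state;
    }
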
703 * cpuidle_coupled_prevent_idle - prevent cpus from entering a coupled state
704 * @coupled: the struct cpuidle_coupled that contains the cpu that is changing state
722 * cpuidle_coupled_allow_idle - allow cpus to enter a coupled state
723 * @coupled: the struct cpuidle_coupled that contains the cpu that is changing state
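
Lines 703-723 document the prevent/allow pair that fences coupled idle off while one of the coupled cpus is changing state. A hedged sketch of the bracketing pattern the pair implies (the calling context is an assumption; the listing does not show where these functions are invoked from):

    /* Assumed usage: bracket a cpu state change so no coupled entry races it. */
    static int example_coupled_cpu_changing(struct cpuidle_coupled *coupled)
    {
            cpuidle_coupled_prevent_idle(coupled);  /* no coupled entries from here on */

            /* ... take the cpu up or down and update the coupled bookkeeping ... */

            cpuidle_coupled_allow_idle(coupled);    /* coupled states usable again */
            return 0;
    }
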