Lines Matching refs:tasks

288  * tasks, but still be able to sleep. We need this on platforms that cannot
346 * To keep the bandwidth of -deadline tasks under control
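The dl_bw comment at 346 refers to tracking -deadline bandwidth as a fixed-point runtime/period ratio and refusing tasks that would push the total over the available capacity. Below is a minimal sketch of that accounting, assuming the kernel's BW_SHIFT fixed-point convention; the struct and helper names (dl_bw_sketch, dl_admit) are illustrative, not the kernel's API.

/* Sketch: -deadline bandwidth as a fixed-point runtime/period ratio.
 * BW_SHIFT matches the kernel convention; everything else is simplified.
 */
#include <stdint.h>
#include <stdbool.h>

#define BW_SHIFT 20                       /* fixed point: 1.0 == 1 << 20 */

struct dl_bw_sketch {
	uint64_t bw;        /* per-CPU bandwidth limit, fixed point */
	uint64_t total_bw;  /* sum of admitted tasks' runtime/period ratios */
};

/* runtime/period as a fixed-point ratio, in the spirit of to_ratio() */
static uint64_t ratio_sketch(uint64_t period, uint64_t runtime)
{
	return (runtime << BW_SHIFT) / period;
}

/* Basic admission test: the new task's bandwidth must fit in what is
 * left across the domain's CPUs, otherwise admission would fail. */
static bool dl_admit(struct dl_bw_sketch *dl_b, int cpus,
		     uint64_t runtime, uint64_t period)
{
	uint64_t new_bw = ratio_sketch(period, runtime);

	return dl_b->total_bw + new_bw <= (uint64_t)cpus * dl_b->bw;
}

The real admission control also accounts for CPU capacity and for bandwidth released by a task changing its parameters; the sketch only shows the basic sum-of-ratios test.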
499 * Controls whether tasks of this cgroup should be colocated with each
500 * other and tasks of other cgroups that have the same flag turned on.
654 * leaf cfs_rqs are those that hold tasks (lowest schedulable entity in
761 * an rb-tree, ordered by tasks' deadlines, with caching
776 * Utilization of the tasks "assigned" to this runqueue (including
777 * the tasks that are in the runqueue and the tasks that executed on this
878 * than one runnable -deadline task (as it is below for RT tasks).
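The rb-tree mentioned at 761 orders -deadline entities by absolute deadline and caches the leftmost node so the earliest deadline can be picked in O(1). A sketch using the generic rb_add_cached()/rb_first_cached() helpers from <linux/rbtree.h>; the entity type below is a stripped-down stand-in, not the real sched_dl_entity.

#include <linux/rbtree.h>
#include <linux/types.h>

/* Simplified -deadline entity: only the fields the ordering needs. */
struct dl_entity_sketch {
	struct rb_node rb_node;
	u64 deadline;                     /* absolute deadline */
};

#define __node_2_dle(node) rb_entry((node), struct dl_entity_sketch, rb_node)

static bool dl_entity_less(struct rb_node *a, const struct rb_node *b)
{
	return __node_2_dle(a)->deadline < __node_2_dle(b)->deadline;
}

/* Enqueue keeps the tree ordered by deadline; the cached leftmost node
 * is the earliest-deadline entity, so picking the next one is O(1). */
static void dl_enqueue_sketch(struct rb_root_cached *root,
			      struct dl_entity_sketch *dl_se)
{
	rb_add_cached(&dl_se->rb_node, root, dl_entity_less);
}

static struct dl_entity_sketch *dl_pick_sketch(struct rb_root_cached *root)
{
	struct rb_node *leftmost = rb_first_cached(root);

	return leftmost ? __node_2_dle(leftmost) : NULL;
}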
931 * @value: utilization clamp value for tasks on this clamp bucket
932 * @tasks: number of RUNNABLE tasks on this clamp bucket
934 * Keep track of how many tasks are RUNNABLE for a given utilization
939 unsigned long tasks : BITS_PER_LONG - bits_per(SCHED_CAPACITY_SCALE);
947 * Keep track of RUNNABLE tasks on a rq to aggregate their clamp values.
956 * utilization required by its currently RUNNABLE tasks.
958 * maximum utilization allowed by its currently RUNNABLE tasks.
1009 /* Utilization clamp values based on CPU's RUNNABLE tasks */
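The comments at 931-1009 describe the utilization-clamp buckets: each bucket counts how many RUNNABLE tasks requested a clamp value falling into it, and the rq-wide clamp is the maximum over non-empty buckets. The real struct packs value and tasks into a single unsigned long bitfield (line 939); the sketch below uses plain fields and simplified helper names, so it illustrates the aggregation idea rather than reproducing the kernel's exact code.

/* Sketch of per-rq utilization-clamp aggregation. The real logic lives
 * in kernel/sched/core.c (uclamp_rq_inc_id()/uclamp_rq_dec_id()); bucket
 * sizing and names here are simplified for illustration. */
#define SCHED_CAPACITY_SCALE_SKETCH	1024
#define UCLAMP_BUCKETS_SKETCH		5

struct uclamp_bucket_sketch {
	unsigned int value;   /* clamp value this bucket represents */
	unsigned int tasks;   /* RUNNABLE tasks currently in the bucket */
};

struct uclamp_rq_sketch {
	unsigned int value;   /* rq-wide clamp: max over non-empty buckets */
	struct uclamp_bucket_sketch bucket[UCLAMP_BUCKETS_SKETCH];
};

static unsigned int bucket_id_sketch(unsigned int clamp_value)
{
	return clamp_value * UCLAMP_BUCKETS_SKETCH /
	       (SCHED_CAPACITY_SCALE_SKETCH + 1);
}

/* A task becomes RUNNABLE: count it and, if needed, raise the rq clamp. */
static void uclamp_rq_inc_sketch(struct uclamp_rq_sketch *uc_rq,
				 unsigned int clamp_value)
{
	struct uclamp_bucket_sketch *bucket =
		&uc_rq->bucket[bucket_id_sketch(clamp_value)];

	bucket->tasks++;
	if (clamp_value > bucket->value)
		bucket->value = clamp_value;
	if (bucket->value > uc_rq->value)
		uc_rq->value = bucket->value;
}

/* A task stops being RUNNABLE: when its bucket empties, recompute the
 * rq-wide clamp as the max over the remaining non-empty buckets. */
static void uclamp_rq_dec_sketch(struct uclamp_rq_sketch *uc_rq,
				 unsigned int clamp_value)
{
	struct uclamp_bucket_sketch *bucket =
		&uc_rq->bucket[bucket_id_sketch(clamp_value)];
	unsigned int i, max = 0;

	if (bucket->tasks)
		bucket->tasks--;
	if (bucket->tasks)
		return;

	for (i = 0; i < UCLAMP_BUCKETS_SKETCH; i++)
		if (uc_rq->bucket[i].tasks && uc_rq->bucket[i].value > max)
			max = uc_rq->bucket[i].value;
	uc_rq->value = max;
}

The per-bucket tasks counter exists so that when the last task of the highest bucket is dequeued, the rq clamp can drop back to the next non-empty bucket's value.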
1643 * Return the group to which this task belongs.
1824 * of tasks with abnormal "nice" values across CPUs the contribution that
1826 * scheduling class and "nice" value. For SCHED_NORMAL tasks this is just a
1846 * SAVE/RESTORE - an otherwise spurious dequeue/enqueue, done to ensure tasks
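Lines 1824-1826 explain that each task's contribution to runqueue load is weighted by its scheduling class and "nice" value. For SCHED_NORMAL the kernel uses the precomputed sched_prio_to_weight[] table, in which each nice step scales the weight by roughly 1.25x around NICE_0_LOAD == 1024; the helper below merely approximates that table and is not kernel code.

/* Sketch: approximate nice-to-weight mapping for SCHED_NORMAL tasks.
 * The kernel uses the hand-tuned sched_prio_to_weight[] table; the
 * ~1.25x-per-nice-level formula below is for illustration only. */
#include <math.h>

#define NICE_0_LOAD 1024

static unsigned long approx_nice_to_weight(int nice)	/* nice in [-20, 19] */
{
	return (unsigned long)(NICE_0_LOAD * pow(1.25, -nice) + 0.5);
}

With this weighting, a nice -5 task competing against a nice 0 task receives roughly 1.25^5, about 3x, the CPU time, in line with the table's 3121 vs 1024 entries.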
2065 * Tick may be needed by tasks in the runqueue depending on their policy and
2487 * and DL, though, because they may not be coming in if only RT tasks are
2488 * active all the time (or there are RT tasks only).
2493 * solutions targeted more specifically at RT tasks.
2567 * RUNNABLE tasks with _different_ clamps, we can end up with an