Lines Matching defs:tasks
93 * Targeted preemption latency for CPU-bound tasks:
122 * Minimal preemption granularity for CPU-bound tasks:
519 * both tasks until we find their ancestors who are siblings of a common
931 * 2) from those tasks that meet 1), we select the one
1129 * Tasks are initialized with full load to be seen as heavy tasks until
1141 * With new tasks being created, their initial util_avgs are extrapolated
1150 * To solve this problem, we also cap the util_avg of successive tasks to
1176 * For !fair tasks do:
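
The comments quoted from 1129-1176 describe how a new task's util_avg is seeded. Below is a stand-alone illustration only: the function name and the 1024-unit capacity scale are assumptions of this sketch, not the kernel's API. The point is that the child's util_avg is extrapolated from its runqueue in proportion to its weight and capped at half of the CPU's remaining capacity, so a burst of forks cannot pretend to need more than the whole CPU.

#include <stdio.h>

#define CPU_CAPACITY 1024UL     /* assumed full-capacity scale */

static unsigned long init_task_util(unsigned long cfs_util_avg,
                                    unsigned long cfs_load_avg,
                                    unsigned long se_weight)
{
        long cap = (long)(CPU_CAPACITY - cfs_util_avg) / 2;
        unsigned long util;

        if (cap <= 0)
                return 0;       /* CPU already saturated: start from zero */

        if (!cfs_util_avg)
                return cap;     /* idle CPU: claim half of the spare capacity */

        /* extrapolate from the runqueue, proportionally to the task's weight */
        util = cfs_util_avg * se_weight / (cfs_load_avg + 1);
        return util > (unsigned long)cap ? (unsigned long)cap : util;
}

int main(void)
{
        /* e.g. a half-busy CPU: the child starts with at most ~25% utilization */
        printf("%lu\n", init_task_util(512, 1024, 1024));
        return 0;
}
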
1335 * Are we enqueueing a waiting task? (for current tasks
1415 * threshold. Above this threshold, individual tasks may be contending
1417 * approximation as the number of running tasks may not be related to
1425 * tasks that remain local when the destination is lightly loaded.
1437 * calculated based on the task's virtual memory size and
1455 spinlock_t lock; /* nr_tasks, tasks */
1716 * of nodes, and move tasks towards the group with the most
1752 * larger multiplier, in order to group tasks together that are almost
2016 /* The node has spare capacity that can be used to run more tasks. */
2019 * The node is fully used and the tasks don't compete for more CPU
2020 * cycles. Nevertheless, some tasks might wait before running.
2025 * tasks.
2235 * be improved if the source task was migrated to the target dst_cpu taking
2295 * be incurred if the tasks were swapped.
2297 * If dst and source tasks are in the same NUMA group, or not
2303 * Do not swap within a group or between tasks that have
2315 * tasks within a group over tiny differences.
2363 * of tasks and also hurt performance due to cache
2443 * more running tasks then the imbalance is ignored as the
2466 * than swapping tasks around, check if a move is possible.
2508 * imbalance and would be the first to start moving tasks about.
2510 * And we want to avoid any moving of tasks about, as that would create
2511 * random movement of tasks -- counter the numa conditions we're trying
2732 * Most memory accesses are shared with other tasks.
2734 * since other tasks may just move the memory elsewhere.
2825 * tasks from numa_groups near each other in the system, and
2928 * Normalize the faults_from, so all tasks in a group
3387 * Scanning the VMAs of short-lived tasks adds more overhead. So
3450 * Make sure tasks use at least 32x as much time to run other code
3971 * on an 8-core system with 8 tasks each runnable on one CPU shares has
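
For the group-shares discussion around 3971, here is a tiny numeric sketch (identifiers invented) of the ideal split the surrounding comment block derives: a task group's shares are divided among CPUs in proportion to the group's per-CPU runqueue load, so with one equal task per CPU on an 8-core system each per-CPU group entity gets shares/8.

#include <stdio.h>

#define NR_CPUS 8

int main(void)
{
        unsigned long tg_shares = 1024;         /* the group's configured weight */
        unsigned long grq_load[NR_CPUS];
        unsigned long total = 0;

        for (int cpu = 0; cpu < NR_CPUS; cpu++) {
                grq_load[cpu] = 1024;           /* one nice-0 task per CPU */
                total += grq_load[cpu];
        }

        for (int cpu = 0; cpu < NR_CPUS; cpu++)
                printf("cpu%d group entity weight: %lu\n", cpu,
                       tg_shares * grq_load[cpu] / total);      /* 128 each */
        return 0;
}
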
4022 * number includes things like RT tasks.
4209 * Imagine a rq with 2 tasks that each are runnable 2/3 of the time. Then the
4211 * runnable sections of these tasks overlap (or not). If they were to perfectly
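
As a quick numeric check of the overlap argument quoted at 4209-4211, here is a toy user-space simulation (the 3-tick period and phases are invented for the example): with two tasks each runnable 2/3 of the time, the runqueue as a whole is non-empty 2/3 of the time when their runnable sections align and all of the time when they are staggered.

#include <stdio.h>

static double rq_runnable_fraction(int phase_b)
{
        int ticks = 3000, busy = 0;

        for (int t = 0; t < ticks; t++) {
                int a = (t % 3) < 2;                 /* task A runnable 2/3 */
                int b = ((t + phase_b) % 3) < 2;     /* task B, shifted phase */
                busy += (a || b);
        }
        return (double)busy / ticks;
}

int main(void)
{
        printf("aligned:   %.3f\n", rq_runnable_fraction(0)); /* ~0.667 */
        printf("staggered: %.3f\n", rq_runnable_fraction(2)); /* ~1.000 */
        return 0;
}
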
4323 * assuming all tasks are equally runnable.
4527 * avg. The immediate corollary is that all (fair) tasks must be attached.
4566 * a lot of tasks with the rounding problem between 2 updates of
4759 * tasks cannot exit without having gone through wake_up_new_task() ->
5206 * adding tasks with positive lag, or removing tasks with negative lag
5208 * other tasks.
5283 * When joining the competition, the existing tasks will be,
5284 * on average, halfway through their slice, as such start tasks
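
The two placement notes quoted above (5206-5208 and 5283-5284) are easiest to see with numbers. The sketch below is not kernel code and all values are made up; it shows that inserting an entity whose saved virtual lag vl is non-zero shifts the weighted-average vruntime V, so a naive placement at V - vl loses part of the lag, while scaling the lag by (W + w)/W before placing, as the placement comments in this file derive, preserves it.

#include <stdio.h>

int main(void)
{
        double W = 3072.0, V = 100.0;   /* weight and avg vruntime of the others */
        double w = 1024.0, vl = 6.0;    /* entity being placed, with saved lag   */

        /* naive placement: lag shrinks to vl*W/(W+w) once V is recomputed */
        double v_naive = V - vl;
        double V_naive = (W * V + w * v_naive) / (W + w);
        printf("naive lag after insert:       %.3f\n", V_naive - v_naive);

        /* compensated placement, as described: scale the lag by (W+w)/W */
        double v_comp = V - vl * (W + w) / W;
        double V_comp = (W * V + w * v_comp) / (W + w);
        printf("compensated lag after insert: %.3f\n", V_comp - v_comp);
        return 0;
}
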
5469 * when there are only lesser-weight tasks around):
6652 * CFS operations on tasks:
6722 /* Runqueue only has SCHED_IDLE tasks enqueued */
6752 * In case of simultaneous wakeup from idle, the latency-sensitive tasks
6753 * lost the opportunity to preempt non-sensitive tasks which woke up
6837 * Since new tasks are assigned an initial util_avg equal to
6838 * half of the spare capacity of their CPU, tiny tasks have the
6842 * for the first enqueue operation of new tasks during the
6846 * the PELT signals of tasks to converge before taking them
6933 /* balance early to pull high priority tasks */
6972 * The load of a CPU is defined by the load of tasks currently enqueued on that
6973 * CPU as well as tasks which are currently sleeping after an execution on that
7029 * Only decay a single time; tasks that have less than 1 wakeup per
7091 * interrupt intensive workload could force all tasks onto one
7647 * kworker thread and the task's previous CPUs are the same.
7715 * cpu_util() - Estimates the amount of CPU capacity used by CFS tasks.
7724 * CPU utilization is the sum of running time of runnable tasks plus the
7725 * recent utilization of currently non-runnable tasks on that CPU.
7726 * It represents the amount of CPU capacity currently used by CFS tasks in
7732 * runnable tasks on that CPU. It preserves a utilization "snapshot" of
7733 * previously-executed tasks, which helps better deduce how busy a CPU will
7738 * CPU contention for CFS tasks can be detected by CPU runnable > CPU
7745 * of rounding errors as well as task migrations or wakeups of new tasks.
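
The cpu_util() description quoted in 7715-7745 reduces to taking the larger of the decaying util_avg and the util_est snapshot and clamping to capacity. A minimal user-space sketch of just that step follows; the struct, helper name and capacity constant are assumptions, and the real function additionally handles a boost and a migrating task.

#include <stdio.h>

#define CPU_CAPACITY 1024UL

struct cpu_signals {
        unsigned long util_avg; /* decays while the CPU is idle        */
        unsigned long util_est; /* snapshot of recently-runnable tasks */
};

static unsigned long cpu_util_sketch(const struct cpu_signals *cs)
{
        unsigned long util = cs->util_avg;

        if (cs->util_est > util)
                util = cs->util_est;

        return util < CPU_CAPACITY ? util : CPU_CAPACITY;
}

int main(void)
{
        struct cpu_signals cs = { .util_avg = 300, .util_est = 520 };

        /* the sleeping tasks' recent utilization still counts: prints 520 */
        printf("%lu\n", cpu_util_sketch(&cs));
        return 0;
}
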
7845 * The utilization of a CPU is defined by the utilization of tasks currently
7846 * enqueued on that CPU as well as tasks which are currently sleeping after an
7858 * WALT does not decay idle tasks in the same manner
7995 * NOTE: in case RT tasks are running, by default the
8041 * small tasks on a CPU in order to let other CPUs go into deeper idle states,
8057 * bias new tasks towards specific types of CPUs first, or to try to infer
8423 * By using 'se' instead of 'curr' we penalize light tasks, so
8521 /* Idle tasks are by definition preempted by non-idle tasks. */
8527 * Batch and idle tasks do not preempt non-idle tasks (their preemption
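
The two rules quoted at 8521 and 8527 can be written out as a small decision helper. This is an invented user-space sketch, not the kernel's wakeup-preemption code, and the final branch stands in for the eligibility check that decides the remaining cases.

#include <stdbool.h>
#include <stdio.h>

enum policy { POL_NORMAL, POL_BATCH, POL_IDLE };

static bool wakeup_may_preempt(enum policy curr, enum policy waker)
{
        if (curr == POL_IDLE && waker != POL_IDLE)
                return true;    /* idle tasks are preempted by non-idle tasks */
        if (waker == POL_BATCH || waker == POL_IDLE)
                return false;   /* batch/idle wakeups rely on the tick instead */
        return true;            /* otherwise defer to the usual eligibility test */
}

int main(void)
{
        printf("%d %d\n", wakeup_may_preempt(POL_IDLE, POL_NORMAL),
                          wakeup_may_preempt(POL_NORMAL, POL_BATCH)); /* 1 0 */
        return 0;
}
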
8842 * We then move tasks around to minimize the imbalance. In the continuous
8871 * Coupled with a limit on how many tasks we can migrate every balance pass,
8939 /* The group has spare capacity that can be used to run more tasks. */
8942 * The group is fully used and the tasks don't compete for more CPU
8943 * cycles. Nevertheless, some tasks might wait before running.
8963 * The tasks' affinity constraints previously prevented the scheduler
8969 * tasks.
9013 struct list_head tasks;
9131 * We do not migrate tasks that are:
9154 * meet load balance goals by pulling other tasks on src_cpu.
9270 * detach_tasks() -- tries to detach up to imbalance load/util/tasks from
9273 * Returns number of detached tasks if successful and 0 otherwise.
9277 struct list_head *tasks = &env->src_rq->cfs_tasks;
9305 while (!list_empty(tasks)) {
9322 /* take a breather every nr_migrate tasks */
9329 p = list_last_entry(tasks, struct task_struct, se.group_node);
9337 * Depending on the number of CPUs and tasks and the
9341 * detaching up to loop_max tasks.
9384 list_add(&p->se.group_node, &env->tasks);
9400 * load/util/tasks.
9407 list_move(&p->se.group_node, tasks);
9412 tasks = &env->src_rq->cfs_tasks;
9456 * attach_tasks() -- attaches all tasks detached by detach_tasks() to their
9461 struct list_head *tasks = &env->tasks;
9468 while (!list_empty(tasks)) {
9469 p = list_first_entry(tasks, struct task_struct, se.group_node);
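
Lines 9270-9469 quote the detach/attach pattern: detach_tasks() takes tasks off the source runqueue's cfs_tasks list onto a private env->tasks list until an imbalance budget is spent, and attach_tasks() then moves them onto the destination. The following stand-alone sketch mimics only that pattern with a deliberately simplified singly linked list; none of the structs or helpers are the kernel's.

#include <stdio.h>

struct task { char name[8]; long load; struct task *next; };

struct rq { struct task *head; };       /* head = next task to pick */

static void push(struct rq *rq, struct task *t)
{
        t->next = rq->head;
        rq->head = t;
}

static struct task *pop(struct rq *rq)
{
        struct task *t = rq->head;
        if (t)
                rq->head = t->next;
        return t;
}

/* detach tasks from src until the load budget ("imbalance") is spent */
static void detach_tasks(struct rq *src, struct rq *detached, long imbalance)
{
        struct task *t;

        while (imbalance > 0 && (t = pop(src))) {
                imbalance -= t->load;
                push(detached, t);
        }
}

/* attach everything that was detached onto the destination runqueue */
static void attach_tasks(struct rq *detached, struct rq *dst)
{
        struct task *t;

        while ((t = pop(detached)))
                push(dst, t);
}

int main(void)
{
        struct task a = { "a", 300 }, b = { "b", 400 }, c = { "c", 500 };
        struct rq src = { 0 }, dst = { 0 }, detached = { 0 };

        push(&src, &a);
        push(&src, &b);
        push(&src, &c);

        detach_tasks(&src, &detached, 600);     /* detaches c and b, leaves a */
        attach_tasks(&detached, &dst);

        for (struct task *t = dst.head; t; t = t->next)
                printf("moved %s\n", t->name);
        return 0;
}
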
9692 unsigned int sum_nr_running; /* Nr of tasks running in the group */
9693 unsigned int sum_h_nr_running; /* Nr of CFS tasks running in the group */
9716 unsigned int prefer_sibling; /* tasks should go to sibling first */
9883 * Imagine a situation of two groups of 4 CPUs each and 4 tasks each with a
9890 * If we were to balance group-wise we'd place two tasks in the first group and
9891 * two tasks in the second group. Clearly this is undesired as it will overload
9896 * moving tasks due to affinity constraints.
9915 * be used by some tasks.
9918 * available capacity for CFS tasks.
9920 * account the variance of the tasks' load and to return true if the available
9943 * group_is_overloaded returns true if the group has more tasks than it can
9946 * with the exact right number of tasks, has no more spare capacity but is not
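
Lines 9915-9946 distinguish groups that have spare capacity, are fully busy, or are overloaded. A simplified sketch of those checks follows; the field names and the threshold are assumptions, and the real code also accounts for runnable pressure. Note the middle case: a group with exactly as many tasks as CPUs and little utilization headroom is neither spare nor overloaded.

#include <stdbool.h>
#include <stdio.h>

struct group_stats {
        unsigned int nr_running;  /* runnable tasks in the group */
        unsigned int nr_cpus;     /* CPUs in the group           */
        unsigned long util;       /* summed utilization          */
        unsigned long capacity;   /* summed CPU capacity         */
};

static bool group_has_spare(const struct group_stats *gs, unsigned int imb_pct)
{
        if (gs->nr_running < gs->nr_cpus)
                return true;
        return gs->capacity * 100 > gs->util * imb_pct;
}

static bool group_overloaded(const struct group_stats *gs, unsigned int imb_pct)
{
        if (gs->nr_running <= gs->nr_cpus)
                return false;
        return gs->capacity * 100 < gs->util * imb_pct;
}

int main(void)
{
        /* 4 CPUs, 4 tasks, nearly full: no spare capacity, yet not overloaded */
        struct group_stats gs = { .nr_running = 4, .nr_cpus = 4,
                                  .util = 3900, .capacity = 4096 };
        unsigned int imb_pct = 117;     /* a typical imbalance percentage */

        printf("spare=%d overloaded=%d\n",
               group_has_spare(&gs, imb_pct), group_overloaded(&gs, imb_pct));
        return 0;
}
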
10072 * to a CPU that doesn't have multiple tasks sharing its CPU capacity.
10258 * Don't try to pull misfit tasks we can't help.
10323 * group because tasks have all compute capacity that they need
10362 * and highest number of running tasks. We could also compare
10365 * CPUs which means less opportunity to pull tasks.
10378 * per-CPU capacity. Migrating tasks to less capable CPUs may harm
10641 /* There is no idlest group to push tasks to */
10686 * idlest group don't try and push any tasks.
10726 * and improve locality if the number of running tasks
10883 * Indicate that the child domain of the busiest group prefers tasks
10928 /* Set imbalance to allow misfit tasks to be balanced. */
10953 /* Reduce number of tasks sharing CPU capacity */
11007 * When prefer_sibling is set, evenly spread running tasks on
11032 /* Number of tasks to move to restore balance */
11053 * busiest group don't try to pull any tasks.
11065 * load, don't try to pull any tasks.
11134 /* There is no busy sibling group to pull tasks from */
11140 /* Misfit tasks should be dealt with regardless of the avg load */
11166 * don't try and pull any tasks.
11173 * between tasks.
11178 * busiest group don't try to pull any tasks.
11188 * Don't pull any tasks if this group is already above the
11204 * Try to move all excess tasks to a sibling domain of the busiest
11243 * busiest doesn't have any tasks waiting to run
11280 * - regular: there are !numa tasks
11281 * - remote: there are numa tasks that run on the 'wrong' node
11284 * In order to avoid migrating ideally placed numa tasks,
11289 * queue by moving tasks around inside the node.
11293 * allow migration of more tasks.
11321 * Make sure we only pull tasks from a CPU of lower priority
11391 * For ASYM_CPUCAPACITY domains with misfit tasks we
11408 * Max backoff if we encounter pinned tasks. Pretty arbitrary value, but
11417 * ASYM_PACKING needs to force migrate tasks from busy but lower
11418 * priority CPUs in order to pack all tasks in the highest priority
11438 * The imbalanced case includes the case of pinned tasks preventing a fair
11507 * However, we bail out if we already have tasks or a wakeup pending,
11558 * tasks if there is an imbalance.
11579 .tasks = LIST_HEAD_INIT(env.tasks),
11616 * Attempt to move tasks. If find_busiest_group has found
11634 * We've detached some tasks from busiest_rq. Every
11637 * that nobody can manipulate the tasks in parallel.
11652 /* Stop if we tried all running tasks */
11658 * Revisit (affine) tasks on src_cpu that couldn't be moved to
11704 /* All tasks on this runqueue were pinned by CPU affinity */
11788 * constraints. Clear the imbalance flag only if other tasks got
11800 * We reach balance because all tasks are pinned at this level so
11868 * running tasks off the busiest CPU onto idle CPUs. It requires at
11889 * CPUs can become inactive. We should not move tasks from or to
12091 * state even if we migrated tasks. Update it.
12267 * currently idle; in which case, kick the ILB to move tasks
12296 * ensure tasks have enough CPU capacity.
12312 * like this LLC domain has tasks we could move.
12458 * task movement depending on flags.
12645 * idle. Attempts to pull tasks from other CPUs.
12648 * < 0 - we released the lock and there are !fair tasks present
12649 * 0 - failed, no new tasks
12650 * > 0 - success, new (fair) tasks present
12679 * Do not pull tasks towards !active CPUs...
12736 * Stop searching for tasks to pull if there are
12737 * now runnable tasks on this rq.
12962 * tasks on this CPU and the forced idle CPU. Ideally, we should
12965 * MIN_NR_TASKS_DURING_FORCEIDLE - 1 tasks and use that to check
13016 * Find an se in the hierarchy for tasks a and b, such that the se's
13579 * Time slice is 0 for SCHED_OTHER tasks that are on an otherwise
13756 * We remove the throttled cfs_rq's tasks' contribution from the