Lines Matching refs:tasks
63 * Targeted preemption latency for CPU-bound tasks:
92 * Minimal preemption granularity for CPU-bound tasks:
459 * both tasks until we find their ancestors who are siblings of common
720 * When there are too many tasks (sched_nr_latency) we have to stretch
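A minimal standalone sketch of the relationship those tunables imply (the helper name sched_period and the default values are assumptions, not taken from these lines): once the number of runnable tasks exceeds sched_nr_latency, the scheduling period is stretched so each task still gets at least the minimal granularity instead of slicing the targeted latency ever thinner.

    /* Standalone sketch; values are typical defaults, used here for illustration only. */
    static const unsigned long sysctl_sched_latency         = 6000000UL; /* 6 ms    */
    static const unsigned long sysctl_sched_min_granularity =  750000UL; /* 0.75 ms */
    static const unsigned long sched_nr_latency             = 8;         /* latency / granularity */

    static unsigned long sched_period(unsigned long nr_running)
    {
        /* Too many tasks: stretch the period so every task still gets
         * the minimal granularity rather than an ever-thinner slice. */
        if (nr_running > sched_nr_latency)
            return nr_running * sysctl_sched_min_granularity;
        return sysctl_sched_latency;
    }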
798 * Tasks are initialized with full load to be seen as heavy tasks until
813 * With new tasks being created, their initial util_avgs are extrapolated
822 * To solve this problem, we also cap the util_avg of successive tasks to
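A standalone model of the capping arithmetic the lines above describe, using plain parameters instead of the scheduler's internal structures (the function name and the exact scaling are assumptions based on the surrounding comments): a new task's initial util_avg is extrapolated from the runqueue's current utilization, then capped so successive new tasks cannot claim more than half of the remaining spare capacity.

    /* Standalone sketch, not the kernel helper itself. */
    static long init_task_util_avg(long cpu_capacity,  /* e.g. 1024           */
                                   long cfs_util_avg,  /* rq's current util   */
                                   long cfs_load_avg,  /* rq's current load   */
                                   long task_weight)   /* new task's weight   */
    {
        long cap = (cpu_capacity - cfs_util_avg) / 2;  /* half of spare capacity */
        long util;

        if (cap <= 0)
            return 0;              /* CPU already saturated, start from zero  */

        if (cfs_util_avg == 0)
            return cap;            /* first task on an otherwise idle rq      */

        /* Extrapolate from the rq's utilization, scaled by the task's weight,
         * then cap it so a burst of forked tasks cannot overshoot. */
        util = cfs_util_avg * task_weight / (cfs_load_avg + 1);
        return util > cap ? cap : util;
    }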
863 * For !fair tasks do:
1062 * Are we enqueueing a waiting task? (for current tasks
1118 * calculated based on the task's virtual memory size and
1133 spinlock_t lock; /* nr_tasks, tasks */
1396 * of nodes, and move tasks towards the group with the most
1434 * larger multiplier, in order to group tasks together that are almost
1552 /* The node has spare capacity that can be used to run more tasks. */
1555 * The node is fully used and the tasks don't compete for more CPU
1556 * cycles. Nevertheless, some tasks might wait before running.
1561 * tasks.
1794 * be improved if the source task was migrated to the target dst_cpu taking
1856 * be incurred if the tasks were swapped.
1858 * If dst and source tasks are in the same NUMA group, or not
1866 * tasks within a group over tiny differences.
1916 * of tasks and also hurt performance due to cache
1998 * more running tasks then the imbalance is ignored as the
2019 * than swapping tasks around, check if a move is possible.
2064 * imbalance and would be the first to start moving tasks about.
2066 * And we want to avoid any moving of tasks about, as that would create
2067 * random movement of tasks -- counter the numa conditions we're trying
2297 * Most memory accesses are shared with other tasks.
2299 * since other tasks may just move the memory elsewhere.
2393 * tasks from numa_groups near each other in the system, and
2501 * Normalize the faults_from, so all tasks in a group
2961 * Make sure tasks use at least 32x as much time to run other code
3345 * on an 8-core system with 8 tasks each runnable on one CPU shares has
3403 * number includes things like RT tasks.
3540 * Imagine a rq with 2 tasks that each are runnable 2/3 of the time. Then the
3542 * runnable section of these tasks overlap (or not). If they were to perfectly
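One way to make the extremes of that overlap example concrete (assuming only the two tasks run on the rq): if their runnable sections perfectly align, the rq as a whole is runnable 2/3 of the time; if they never overlap, the rq is runnable min(1, 2/3 + 2/3) = 100% of the time, even though each task's individual runnable fraction is unchanged.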
3642 * assuming all tasks are equally runnable.
3764 * avg. The immediate corollary is that all (fair) tasks must be attached, see
3801 * a lot of tasks with the rounding problem between 2 updates of
4005 * tasks cannot exit without having gone through wake_up_new_task() ->
4336 * The 'current' period is already promised to the current tasks,
4435 * fairness detriment of existing tasks.
4649 * when there are only lesser-weight tasks around):
5685 * CFS operations on tasks:
5757 /* Runqueue only has SCHED_IDLE tasks enqueued */
5864 * Since new tasks are assigned an initial util_avg equal to
5865 * half of the spare capacity of their CPU, tiny tasks have the
5869 * for the first enqueue operation of new tasks during the
5873 * the PELT signals of tasks to converge before taking them
5972 /* balance early to pull high priority tasks */
6010 * The load of a CPU is defined by the load of tasks currently enqueued on that
6011 * CPU as well as tasks which are currently sleeping after an execution on that
6064 * Only decay a single time; tasks that have less than 1 wakeup per
6127 * interrupt intensive workload could force all tasks onto one
6674 * Amount of capacity of a CPU that is (estimated to be) used by CFS tasks
6681 * cfs_rq.avg.util_avg is the sum of running time of runnable tasks plus the
6682 * recent utilization of currently non-runnable tasks on a CPU. It represents
6691 * cfs_rq.avg.util_avg and the sum of the estimated utilization of the tasks
6696 * describing the potential for other tasks waking up on the same CPU.
6700 * cfs.avg.util_avg or just after migrating tasks and new task wakeups until
6739 * The utilization of a CPU is defined by the utilization of tasks currently
6740 * enqueued on that CPU as well as tasks which are currently sleeping after an
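A standalone sketch of the combination the utilization comments above describe (the function and parameter names are assumptions): the estimated utilization of a CPU is taken as the larger of the PELT util_avg and the sum of the enqueued tasks' estimated utilization, clamped to the CPU's capacity.

    /* Standalone sketch, not the scheduler's own helper. */
    static unsigned long estimated_cpu_util(unsigned long util_avg,     /* cfs_rq.avg.util_avg        */
                                            unsigned long util_est_sum, /* sum of tasks' est. util    */
                                            unsigned long capacity)     /* CPU capacity for CFS tasks */
    {
        unsigned long util = util_avg > util_est_sum ? util_avg : util_est_sum;

        /* Never report more than the CPU can actually provide. */
        return util < capacity ? util : capacity;
    }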
6754 * WALT does not decay idle tasks in the same manner
6790 * b) if other tasks are SLEEPING on this CPU, which is now exiting
6797 * c) if other tasks are RUNNABLE on that CPU and
6801 * considering the expected utilization of tasks already
6941 * NOTE: in case RT tasks are running, by default the
6969 * small tasks on a CPU in order to let other CPUs go in deeper idle states,
6985 * bias new tasks towards specific types of CPUs first, or to try to infer
7195 * As blocked tasks retain absolute vruntime the migration needs to
7288 * By using 'se' instead of 'curr' we penalize light tasks, so
7410 /* Idle tasks are by definition preempted by non-idle tasks. */
7416 * Batch and idle tasks do not preempt non-idle tasks (their preemption
7719 * We then move tasks around to minimize the imbalance. In the continuous
7748 * Coupled with a limit on how many tasks we can migrate every balance pass,
7816 /* The group has spare capacity that can be used to run more tasks. */
7819 * The group is fully used and the tasks don't compete for more CPU
7820 * cycles. Nevertheless, some tasks might wait before running.
7835 * The tasks' affinity constraints previously prevented the scheduler
7841 * tasks.
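The three comments above outline a classification of groups by how loaded they are; a sketch of such an ordering is below (member names mirror the wording above, but the exact set and order are assumptions, and the real classification has further states, e.g. for misfit and overloaded groups).

    /* Sketch only; incomplete by design. */
    enum group_type {
        group_has_spare = 0,  /* spare capacity, can take more tasks           */
        group_fully_busy,     /* fully used, but tasks don't compete for CPU   */
        group_imbalanced,     /* affinity constraints defeated load balancing  */
    };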
7880 struct list_head tasks;
7999 * We do not migrate tasks that are:
8024 * meet load balance goals by pulling other tasks on src_cpu.
8138 * detach_tasks() -- tries to detach up to imbalance load/util/tasks from
8141 * Returns number of detached tasks if successful and 0 otherwise.
8145 struct list_head *tasks = &env->src_rq->cfs_tasks;
8166 while (!list_empty(tasks)) {
8175 p = list_last_entry(tasks, struct task_struct, se.group_node);
8183 /* take a breather every nr_migrate tasks */
8197 * Depending on the number of CPUs and tasks and the
8201 * detaching up to loop_max tasks.
8246 list_add(&p->se.group_node, &env->tasks);
8263 * load/util/tasks.
8271 list_move(&p->se.group_node, tasks);
8276 tasks = &env->src_rq->cfs_tasks;
8320 * attach_tasks() -- attaches all tasks detached by detach_tasks() to their
8325 struct list_head *tasks = &env->tasks;
8332 while (!list_empty(tasks)) {
8333 p = list_first_entry(tasks, struct task_struct, se.group_node);
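The fragments above come from the detach/attach pair used during load balancing: tasks are unlinked from the busiest runqueue onto a private per-env list, and only re-attached to the destination runqueue afterwards. A toy standalone version of that two-phase pattern is sketched below, using a simple singly linked list instead of the kernel's struct list_head; all names here are illustrative.

    struct toy_task {
        int id;
        struct toy_task *next;
    };

    /* Phase 1: unlink up to 'max' tasks from *src onto the private *env list. */
    static int detach_tasks(struct toy_task **src, struct toy_task **env, int max)
    {
        int detached = 0;

        while (*src && detached < max) {
            struct toy_task *p = *src;
            *src = p->next;          /* take it off the source "runqueue" */
            p->next = *env;          /* park it on the env list           */
            *env = p;
            detached++;
        }
        return detached;             /* number of detached tasks, 0 if none */
    }

    /* Phase 2: move everything from the env list onto the destination. */
    static void attach_tasks(struct toy_task **env, struct toy_task **dst)
    {
        while (*env) {
            struct toy_task *p = *env;
            *env = p->next;
            p->next = *dst;
            *dst = p;
        }
    }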
8590 unsigned int sum_nr_running; /* Nr of tasks running in the group */
8591 unsigned int sum_h_nr_running; /* Nr of CFS tasks running in the group */
8613 unsigned int prefer_sibling; /* tasks should go to sibling first */
8777 * Imagine a situation of two groups of 4 CPUs each and 4 tasks each with a
8784 * If we were to balance group-wise we'd place two tasks in the first group and
8785 * two tasks in the second group. Clearly this is undesired as it will overload
8790 * moving tasks due to affinity constraints.
8809 * be used by some tasks.
8812 * available capacity for CFS tasks.
8814 * account the variance of the tasks' load and to return true if the available
8837 * group_is_overloaded returns true if the group has more tasks than it can
8840 * with the exact right number of tasks, has no more spare capacity but is not
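A sketch of the distinction those two lines draw (the imbalance margin and all names are assumptions): a group with exactly as many tasks as CPUs merely has no spare capacity, while it only counts as overloaded once it has more tasks than CPUs and their utilization exceeds the capacity it can offer.

    /* Sketch only; the margin handling is an assumption. */
    static int group_overloaded(unsigned int nr_running,    /* tasks in the group      */
                                unsigned int nr_cpus,       /* CPUs in the group       */
                                unsigned long capacity,     /* capacity for CFS tasks  */
                                unsigned long util,         /* tasks' utilization      */
                                unsigned int imbalance_pct) /* e.g. 117 == ~17% margin */
    {
        if (nr_running <= nr_cpus)
            return 0;   /* exact fit or fewer: no spare capacity, but not overloaded */

        return capacity * 100 < util * imbalance_pct;
    }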
9049 * Don't try to pull misfit tasks we can't help.
9108 * group because tasks have all compute capacity that they need
9123 * and highest number of running tasks. We could also compare
9126 * CPUs which means less opportunity to pull tasks.
9139 * per-CPU capacity. Migrating tasks to less capable CPUs may harm
9396 /* There is no idlest group to push tasks to */
9440 * idlest group don't try and push any tasks.
9559 /* Tag domain that child domain prefers tasks go to siblings first */
9595 * tasks that remain local when the source domain is almost idle.
9619 /* Set imbalance to allow misfit tasks to be balanced. */
9682 * When prefer_sibling is set, evenly spread running tasks on
9720 * busiest group don't try to pull any tasks.
9799 /* There is no busy sibling group to pull tasks from */
9804 /* Misfit tasks should be dealt with regardless of the avg load */
9825 * don't try and pull any tasks.
9833 * between tasks.
9838 * busiest group don't try to pull any tasks.
9848 * Don't pull any tasks if this group is already above the
9864 /* Try to move all excess tasks to child's sibling domain */
9895 * busiest doesn't have any tasks waiting to run
9931 * - regular: there are !numa tasks
9932 * - remote: there are numa tasks that run on the 'wrong' node
9935 * In order to avoid migrating ideally placed numa tasks,
9940 * queue by moving tasks around inside the node.
9944 * allow migration of more tasks.
10027 * For ASYM_CPUCAPACITY domains with misfit tasks we
10043 * Max backoff if we encounter pinned tasks. Pretty arbitrary value, but
10051 * ASYM_PACKING needs to force migrate tasks from busy but
10052 * lower priority CPUs in order to pack all tasks in the
10149 * tasks if there is an imbalance.
10170 .tasks = LIST_HEAD_INIT(env.tasks),
10205 * Attempt to move tasks. If find_busiest_group has found
10224 * We've detached some tasks from busiest_rq. Every
10227 * that nobody can manipulate the tasks in parallel.
10246 * Revisit (affine) tasks on src_cpu that couldn't be moved to
10292 /* All tasks on this runqueue were pinned by CPU affinity */
10384 * constraints. Clear the imbalance flag only if other tasks got
10397 * We reach balance because all tasks are pinned at this level so
10467 * running tasks off the busiest CPU onto idle CPUs. It requires at
10488 * CPUs can become inactive. We should not move tasks from or to
10685 * state even if we migrated tasks. Update it.
10877 * currently idle; in which case, kick the ILB to move tasks
10903 * ensure tasks have enough CPU capacity.
10919 * like this LLC domain has tasks we could move.
11059 * task movement depending on flags.
11250 * idle. Attempts to pull tasks from other CPUs.
11253 * < 0 - we released the lock and there are !fair tasks present
11254 * 0 - failed, no new tasks
11255 * > 0 - success, new (fair) tasks present
11277 * Do not pull tasks towards !active CPUs...
11334 * Stop searching for tasks to pull if there are
11335 * now runnable tasks on this rq.
12053 * Time slice is 0 for SCHED_OTHER tasks that are on an otherwise
12202 * We remove the throttled cfs_rq's tasks' contribution from the