Lines Matching defs:task
718 * The idea is to set a period in which each task runs once.
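The match at 718 introduces the CFS scheduling period: stretch a latency target so that every runnable task gets one slot per period. A minimal standalone sketch of that idea; the tunable values mirror common defaults but are assumptions for illustration, not the kernel's exact code:

#include <stdint.h>

/* Illustrative tunables; values are assumptions for the sketch. */
static const uint64_t latency_target_ns  = 6000000ULL;  /* ~6 ms period target */
static const uint64_t min_granularity_ns = 750000ULL;   /* ~0.75 ms per task   */

/*
 * Pick a period in which each runnable task runs once: use the latency
 * target when few tasks are runnable, otherwise stretch the period so
 * every task still gets at least the minimum granularity.
 */
static uint64_t sched_period_sketch(unsigned long nr_running)
{
        uint64_t nr_latency = latency_target_ns / min_granularity_ns;

        if (nr_running > nr_latency)
                return nr_running * min_granularity_ns;
        return latency_target_ns;
}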
775 * We calculate the vruntime slice of a to-be-inserted task.
801 * nothing has been attached to the task group yet.
807 /* when this task is enqueued, it will contribute to its cfs_rq's load_avg */
827 * where n denotes the nth task and cpu_scale the CPU capacity.
832 * task util_avg: 512, 256, 128, 64, 32, 16, 8, ...
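Lines 827-832 quote the halving series used to seed a new task's initial util_avg: each successive new task on a 1024-capacity CPU is given half of the utilization budget that is still unclaimed. A small illustrative program (not kernel code) that reproduces the quoted series:

#include <stdio.h>

int main(void)
{
        unsigned long cpu_scale = 1024;  /* CPU capacity, as in the comment */
        unsigned long cfs_rq_util = 0;   /* accumulated runqueue util_avg   */

        /* Each new task is seeded with half of the remaining budget. */
        for (int n = 1; n <= 8; n++) {
                unsigned long task_util = (cpu_scale - cfs_rq_util) / 2;

                cfs_rq_util += task_util;
                printf("task %d: util_avg=%lu, cfs_rq util_avg=%lu\n",
                       n, task_util, cfs_rq_util);
        }
        return 0;  /* prints 512, 256, 128, 64, ... and 512, 768, 896, 960, ... */
}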
892 * Update the current task's runtime statistics.
967 * Preserve migrating task's wait time so wait_start
1042 * amount of time that the task spent sleeping:
1062 * Are we enqueueing a waiting task? (for current tasks
1082 * waiting task:
1101 * We are picking a new current task - update its stats:
1117 * Approximate time to scan a full NUMA task in ms. The task scan period is
1365 static unsigned long score_nearby_nodes(struct task_struct *p, int nid, int maxdist, bool task)
1406 if (task) {
1418 * This seems to result in good task placement.
1432 * These return the fraction of accesses done by a particular task, or
1433 * task group, on a particular numa node. The group weight is given a
1488 * the lifetime of a task. The magic number 4 is based on waiting for
1499 * migration fault to build a temporal task<->page relation. By using
1503 * a task's usage of a particular page (n_p) per total usage of this
1512 * act on an unlikely task<->page relation.
1738 * Clear previous best_cpu/rq numa-migrate flag, since task now
1788 * Used to deter task migration.
1795 * into account that it might be best if task running on the dst_cpu should
1796 * be exchanged with the source task
1844 * node and the best task is.
1852 * "imp" is the fault differential for the source task between the
1854 * the source task and potential destination task. The more negative
1859 * in any group then look only at task weights.
1873 * Compare the group weights. If a task is all by itself
1874 * (not part of a group), use the task weight instead.
1883 /* Discourage picking a task already on its preferred node */
1889 * Encourage picking a task that moves to its preferred node.
1905 * Prefer swapping with a task moving to its preferred node over a
1906 * task that is not.
1915 * task migration might only result in ping pong
1939 /* Evaluate an idle CPU for a task numa move. */
1971 * If a swap candidate must be identified and the current best task
2029 /* Skip this CPU if the source task cannot migrate */
2103 * - the task is part of a numa_group that is interleaved across
2121 /* Only consider nodes where both task and groups benefit */
2136 * If the task is part of a workload that spans multiple NUMA nodes,
2138 * this node as the task's preferred numa node, so the workload can
2140 * A task that migrated to a second choice node will be better off
2181 /* Attempt to migrate a task to a CPU on the preferred node. */
2186 /* This task has no NUMA fault statistics yet */
2191 /* Periodically retry migrating the task to the preferred node */
2195 /* Success if task is already running on preferred CPU */
2261 * If there were no record hinting faults then either the task is
2321 * Get the fraction of time the task has been running since the last
2325 * stats only if the task is so new there are no NUMA statistics yet.
2354 * Determine the preferred nid for a task in a numa_group. This needs to
2472 /* If the task is part of a group prevent parallel updates to group stats */
2622 * the other task will join us.
2685 * Get rid of NUMA statistics associated with a task (either current or dead).
2686 * If @final is set, the task is dead and has reached refcount zero, so we can
2787 * Retry to migrate task to preferred node periodically, in case it
2874 * Delay this task enough that another task of this mm will likely win
2964 * overloaded system we need to limit overhead on a per task basis.
3033 * task needs to have done some actual work before we bother with
3070 * has completed. This is most likely due to a new task that
3076 * node or if the task was not previously running on
3277 * one task. It takes time for our CPU's grq->avg.load_avg to build up,
3347 * case no task is runnable on a CPU, MIN_SHARES=2 should be returned
3448 * Called within set_task_rq() right before setting a task's CPU. The
3462 * We are supposed to update the task to "current" time, then it's up to
3465 * time. This will result in the wakee task being less decayed, but giving
3544 * always have at least 1 runnable task, the rq as a whole is always runnable.
3555 * On removal, we'll assume each task is equally runnable; which yields:
3679 /* Update task and its cfs_rq load average */
3925 /* Update task and its cfs_rq load average */
3932 * Track task load average for carrying it to a new CPU after migration, and
3948 * IOW we're enqueueing a task on a new CPU.
3997 * itself from the cfs_rq (task must be off the queue now).
4136 * Skip update of task's estimated utilization when the task has not
4167 * Skip update of task's estimated utilization when its members are
4181 * To avoid overestimation of actual task utilization, skip updates if
4192 * of the task size. This is done by storing the current PELT value
4337 * however the extra weight of the new task will slow them down a
4338 * little, place the new task so that it fits in the slot that
4422 * If we're the current task, we must renormalise before calling
4434 * placed in the past could significantly boost this task to the
4588 * Preempt the current task with a newly woken task if needed:
4601 * The current task ran long enough, ensure it doesn't get
4609 * Ensure that a task that missed wakeup preemption by a
4634 * Any task has to be enqueued before it gets to execute on
4663 * 1) keep things fair between processes/task groups
4711 * Prefer last buddy, try to return the CPU to a preempted task.
5713 * current task is from our class and nr_running is low enough
5790 * then put the task into the rbtree:
5802 * Let's add the task's estimated utilization to the cfs_rq's
5867 * result in the load balancer ruining all the task placement
5909 * decreased. We remove the task from the rbtree and
5940 * Bias pick_next to pick a task from this cfs_rq, as
6008 * @p: the task which load should be discounted
6015 * the specified task, whenever the task is currently contributing to the CPU
6031 /* Discount task's util from CPU's util */
6055 /* Discount task's runnable from CPU's runnable */
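The matches between 6008 and 6055 all describe the same pattern: when a CPU is being evaluated for a waking task, that task's own PELT contribution (util, load or runnable) is subtracted from the CPU-level signal so it is not counted twice. A hedged standalone sketch of the clamped subtraction, not the kernel's helper itself:

/*
 * Discount a task's contribution from a CPU-level PELT signal, clamping
 * at zero so a stale (larger) task contribution cannot underflow the
 * unsigned CPU value.  Illustrative only.
 */
static unsigned long discount_task_contrib(unsigned long cpu_signal,
                                           unsigned long task_contrib)
{
        return cpu_signal > task_contrib ? cpu_signal - task_contrib : 0;
}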
6081 * A waker of many should wake a different task than the one last awakened
6285 * We need task's util for cpu_util_without, sync it up to
6546 * the task fits. If no CPU is big enough, but there are idle ones, try to
6603 * On asymmetric system, update task utilization because we will check
6604 * that the task fits with cpu's capacity.
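Lines 6546-6604 deal with wakeup placement on asymmetric-capacity systems, where a task only "fits" a CPU if its utilization leaves headroom below that CPU's capacity. A sketch of such a margin check; the ~25% margin (1280/1024) is an assumption for illustration:

/*
 * A task fits a CPU when its utilization, inflated by a safety margin,
 * stays below the CPU's capacity.  Margin chosen for illustration.
 */
static int task_fits_cpu_sketch(unsigned long task_util,
                                unsigned long cpu_capacity)
{
        return task_util * 1280 < cpu_capacity * 1024;
}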
6678 * the utilization with the capacity of the CPU that is available for CFS tasks
6694 * has just got a big task running after a long sleep period. At the same time
6700 * cfs.avg.util_avg or just after migrating tasks and new task wakeups until
6706 * capacity_orig) as it is useful for predicting the capacity required after task
6737 * @p: the task which utilization should be discounted
6744 * utilization of the specified task, whenever the task is currently
6755 * as PELT, so it makes little sense to subtract task
6779 /* Discount task's util from CPU's util */
6785 * a) if *p is the only task sleeping on this CPU, then:
6874 * During wake-up, the task isn't enqueued yet and doesn't
6877 * cpu_util() after the task has been enqueued.
6904 * landscape of @pd's CPUs after the task migration, and uses the Energy Model
6906 * task.
6953 * waking task. find_energy_efficient_cpu() looks for the CPU with maximum
6955 * candidate to execute the task. Then, it uses the Energy Model to figure
6986 * their util_avg from the parent task, but those heuristics could hurt
7044 * IOW, placing the task there would make the CPU
7105 * select_task_rq_fair: Select target runqueue for the waking task in domains
7188 * Called immediately before a task is migrated to a new CPU; task_cpu(p) and
7198 * the task on the new runqueue.
7229 * We are supposed to update the task to "current" time, then
7233 * wakee task being less decayed, but giving the wakee more load
7242 /* We have migrated, no longer consider this task hot */
7292 * be smaller, again penalizing the lighter task.
7295 * task is higher priority than the buddy.
7367 * Preempt the current task with a newly woken task if needed:
7397 * We can come here with TIF_NEED_RESCHED already set from new task
7442 * Only set the backward buddy when the current task is still
7478 * likely that a next task is from the same cgroup as the current.
7523 * Since we haven't yet done put_prev_entity and if the selected task
7524 * is a different task than we started out with, try and touch the
7567 * Move the next running task to the front of
7590 * possible for any higher priority task to appear. In that case we
7618 * Account for a descheduled task
7643 * Are we the only task in the tree?
7693 * time to each task. This is expressed in the following equation:
7702 * Where w_i,j is the weight of the j-th runnable task on CPU i. This weight
7711 * fraction of 'recent' time available for SCHED_OTHER task execution. But it
7765 * The task movement gives a factor of O(m), giving a convergence complexity
7795 * w_i,j,k is the weight of the j-th runnable task in the k-th cgroup on CPU i.
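The fragments from 7693 to 7795 come from the long load-balancing comment in fair.c; paraphrasing its notation (not a verbatim quote), the weights it refers to are roughly:

   W_i   = \Sum_j w_i,j       -- total weight of the runnable tasks on CPU i
   s_k,i = \Sum_j w_i,j,k     -- weight contributed by cgroup k's tasks on CPU i
   S_k   = \Sum_i s_k,i       -- total weight of cgroup k across all CPUs

so load balancing tries to keep the per-CPU totals W_i in proportion across CPUs, with the per-cgroup terms feeding into each task's effective weight.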
7824 * SD_ASYM_CPUCAPACITY only: One task doesn't fit with CPU's capacity
7830 * and the task should be migrated to it instead of running on the
7884 * Is this task likely cache-hot:
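Line 7884 is the cache-hot heuristic used to deter migrations: a task that ran very recently probably still has a warm cache on its CPU. A hedged sketch of such a time-based test; the threshold value is an assumption (the kernel tunable is of a similar order):

#include <stdint.h>

/* Assumed migration-cost threshold, in nanoseconds. */
static const int64_t migration_cost_ns = 500000;

/* A task is treated as cache-hot if it started running within the threshold. */
static int task_is_cache_hot(int64_t now_ns, int64_t exec_start_ns)
{
        return (now_ns - exec_start_ns) < migration_cost_ns;
}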
7927 * Returns 1, if task migration degrades locality
7928 * Returns 0, if task migration improves locality i.e migration preferred.
7929 * Returns -1, if task migration is not affected by locality.
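Lines 7927-7929 spell out a tri-state result for the locality check used during load balancing. A minimal sketch matching that contract (illustrative, not the kernel function):

/*
 * Compare a task's NUMA-fault weight on the source and destination nodes:
 * returns 1 if the migration degrades locality, 0 if it improves it, and
 * -1 if locality information does not apply.
 */
static int migration_locality_sketch(unsigned long src_node_faults,
                                     unsigned long dst_node_faults,
                                     int numa_stats_available)
{
        if (!numa_stats_available || (!src_node_faults && !dst_node_faults))
                return -1;
        return dst_node_faults < src_node_faults;
}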
7990 * can_migrate_task - may task p from runqueue rq be migrated to this_cpu?
8022 * Remember if this task can be migrated to any other CPU in
8046 /* Record that we found at least one task that could run on dst_cpu */
8063 * 2) task is cache cold, or
8084 * detach_task() -- detach the task for the migration specified in env
8104 * detach_one_task() -- tries to dequeue exactly one task from env->src_rq, as
8107 * Returns a task if successful and NULL otherwise.
8178 /* We've more or less seen every task there is, call it quits */
8212 * scheduler fails to find a good waiting task to
8236 /* This is not a misfit task */
8253 * kernels will stop after the first task is detached to minimize
8294 * attach_task() -- attach the task detached by detach_task() to its new rq.
8306 * attach_one_task() -- attaches the task returned from detach_one_task() to
8596 unsigned long group_misfit_task_load; /* A CPU has a task too big for its capacity */
8764 * Check whether a rq has a misfit task and if it looks like we can actually
8765 * help that task: we can migrate the task to a CPU of higher capacity, or
8766 * the task's current CPU is heavily pressured.
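Lines 8764-8766 describe when a runqueue with a misfit task is worth helping: either a higher-capacity CPU exists to move the task to, or the task's current CPU is heavily pressured. A direct restatement of that condition as a sketch:

/*
 * Illustrative misfit check: help is useful only if the rq actually has a
 * misfit task and one of the two quoted conditions holds.
 */
static int misfit_needs_help(unsigned long misfit_task_load,
                             int higher_capacity_cpu_exists,
                             int cpu_heavily_pressured)
{
        return misfit_task_load &&
               (higher_capacity_cpu_exists || cpu_heavily_pressured);
}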
8810 * We consider that a group has spare capacity if the number of tasks is
8983 /* Idle cpu can't have misfit task */
8991 /* Check for a misfit task on the cpu */
9043 /* Make sure that there is at least one task to pull */
9107 * theory, there is no need to pull a task from such a kind of
9138 * Candidate sg has no more than one task per CPU and has higher
9208 * @p: task which should be ignored.
9240 * @p: The task for which we look for the idlest group/CPU.
9271 /* Check if task fits in the group */
9408 * don't try and push the task.
9416 * try and push the task.
9481 * Otherwise, keep the task on this node to stay close
9495 * idle CPUs, which means more opportunity to run tasks.
9638 * to ensure CPU-load equilibrium, try to move any task to fix
9668 * waiting task in this overloaded busiest group. Let's
9875 * busy, let another idle CPU try to pull a task.
9939 * task, the next balance pass can still reduce the busiest
10005 * running task. Whatever its utilization, we will fail to
10006 * detach the task.
10028 * simply seek the "biggest" misfit task.
10068 * The dst_cpu is idle and the src_cpu CPU has only 1 CFS task.
10069 * It's worth migrating the task if the src_cpu's capacity is reduced
10225 * task is masked "TASK_ON_RQ_MIGRATING", so we can safely
10331 * if the curr task on busiest CPU can't be
10357 /* We've kicked active balancing, force task migration. */
10371 * is only 1 task on the busy runqueue (because we don't call
10468 * least 1 task to be running on each physical CPU where possible, and
10500 /* Is there any task to move? */
10746 * - HK_FLAG_MISC CPUs are used for this task, because HK_FLAG_SCHED is not set
10863 * If there's a CFS task and the current CPU has reduced
10893 * to run the misfit task on.
11351 * While browsing the domains, we released the rq lock, a task could
11353 * pretend we pulled a task.
11364 /* Is there a task of a high priority class? */
11457 /* Invoke active balance to force migrate currently running task */
11537 * scheduler tick hitting a task of our scheduling class.
11563 * called on fork with the child task as argument from the parent's context
11599 * Priority of the task has changed. Check to see if we preempt
11600 * the current task.
11640 * When !on_rq, vruntime of the task has usually NOT been normalized.
11645 * - A task which has been woken up by try_to_wake_up() and
11760 * if we can still preempt the current task.
11770 /* Account for a task changing its policy or group.
11772 * This routine is mostly called to set cfs_rq->curr field when a task
11782 * Move the next running task to the front of the list, so our
11940 * Only empty task groups can be destroyed; so we can speculatively
12047 static unsigned int get_rr_interval_fair(struct rq *rq, struct task_struct *task)
12049 struct sched_entity *se = &task->se;
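The last two matches open get_rr_interval_fair(). In CFS kernels of this vintage the function goes on to report the task's time slice in jiffies, or 0 on an otherwise idle runqueue; the shape of that logic as a hedged standalone sketch (the helper parameters are assumptions, not the kernel's API):

#include <stdint.h>

/*
 * Report a round-robin interval: 0 when the runqueue carries no CFS
 * weight (otherwise idle), else the task's computed slice converted
 * from nanoseconds to jiffies.
 */
static unsigned int rr_interval_sketch(unsigned long rq_cfs_weight,
                                       uint64_t slice_ns,
                                       uint64_t ns_per_jiffy)
{
        if (!rq_cfs_weight)
                return 0;
        return (unsigned int)(slice_ns / ns_per_jiffy);
}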