Lines Matching refs:this
95 * NOTE: this latency value is not the same as the concept of
412 * reduces this to two cases and a special case for the root
422 * the list, this means to put the child at the tail
474 * we finally want to del. In this case, tmp_alone_branch moves
672 * Specifically, this is the weighted average of all entity virtual runtimes.
674 * [[ NOTE: this is only equal to the ideal scheduler under the condition
680 * Also see the comment in place_entity() that deals with this. ]]
736 * For this to be so, the result of this function must have a left bias.
768 * Limit this to either double the slice length with a minimum of TICK_NSEC
775 * XXX could add max_slice to the augmented data to track this.
934 * We can do this in O(log n) time due to an augmented RB-tree. The
964 * If this entity is not eligible, try the left subtree.
996 /* min_deadline is at this node, no need to look right */
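    [The fragments at 934, 964 and 996 above describe the O(log n) search for the earliest eligible virtual deadline: the tree is ordered on vruntime and each node caches the minimum deadline of its subtree. Below is a minimal, self-contained sketch of that idea; the struct node layout, eligible() test and pick_* helpers are invented for illustration and are not the kernel's augmented rbtree code.]

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative node: tree ordered by vruntime, heap on deadline. */
struct node {
	uint64_t vruntime;	/* key the tree is sorted on */
	uint64_t deadline;	/* this node's virtual deadline */
	uint64_t min_deadline;	/* min(deadline, left/right->min_deadline) */
	struct node *left, *right;
};

/* Simplified eligibility test: owed service iff vruntime <= avg_vruntime. */
static bool eligible(const struct node *n, uint64_t avg_vruntime)
{
	return n->vruntime <= avg_vruntime;
}

/* Descend a fully eligible subtree to a node achieving its min_deadline. */
static struct node *pick_min_deadline(struct node *n)
{
	while (n) {
		if (n->left && n->left->min_deadline == n->min_deadline) {
			n = n->left;	/* min is in the left branch */
			continue;
		}
		if (n->deadline == n->min_deadline)
			return n;	/* min_deadline is at this node, no need to look right */
		n = n->right;		/* min must be in the right branch */
	}
	return NULL;
}

/* Earliest-deadline node among all eligible nodes: O(log n) overall. */
static struct node *pick_eligible(struct node *n, uint64_t avg_vruntime)
{
	struct node *best = NULL;	/* best eligible node seen on the path */
	struct node *best_left = NULL;	/* fully eligible left subtree, smallest min_deadline */

	while (n) {
		if (!eligible(n, avg_vruntime)) {
			n = n->left;	/* this node and its right subtree are not eligible */
			continue;
		}
		/* n and its entire left subtree are eligible. */
		if (!best || n->deadline < best->deadline)
			best = n;
		if (n->left && (!best_left ||
				n->left->min_deadline < best_left->min_deadline))
			best_left = n->left;
		n = n->right;		/* later-vruntime nodes may still be eligible */
	}

	if (best_left && (!best || best_left->min_deadline < best->deadline))
		return pick_min_deadline(best_left);
	return best;
}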
1086 * this is probably good enough.
1137 /* when this task is enqueued, it will contribute to its cfs_rq's load_avg */
1150 * To solve this problem, we also cap the util_avg of successive tasks to
1296 * will be 0, so the delta would be wrong. We need to avoid this
1415 * threshold. Above this threshold, individual tasks may be contending
1969 * a task's usage of a particular page (n_p) per total usage of this
1972 * Our periodic faults will sample this probability and getting the
2222 /* Would this change make things worse? */
2276 /* Skip this swap candidate if cannot move to the source cpu. */
2281 * Skip this swap candidate if it is not moving to its preferred
2339 * 1998 (see SMALLIMP and task_weight for why) but in this
2475 /* Skip this CPU if the source task cannot migrate */
2581 * this node as the task's preferred numa node, so the workload can
2642 * Find out how many nodes the workload is actively running on. Do this by
2765 /* Use the start of this time slice to avoid calculations. */
2837 /* Are there nodes at this distance from each other? */
3246 * much of an issue though, since this is just used for
3294 * NOTE: make sure not to dereference p->mm before this check,
3296 * without p->mm even though we still had it when we enqueued this
3324 * Delay this task enough that another task of this mm will likely win
3332 virtpages = pages * 8; /* Scan up to this much virtual space */
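    [Worked example for the line above, assuming 4 KiB pages, the default 256 MB sysctl_numa_balancing_scan_size, and the MB-to-pages conversion done just before this line in task_numa_work(): pages = 256 << (20 - 12) = 65536, so virtpages = 65536 * 8 = 524288 pages, i.e. each pass scans at most 2 GiB of virtual address space.]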
3712 * Proof: For contradiction assume this is not true, so we can
3873 * All this does is approximate the hierarchical proportion which includes that
3884 * there, done that) we approximate it with this average stuff. The average
3902 * to be exactly that of course -- this leads to transients in boundary
4016 * There are a few boundary cases this might miss but it should
4048 * break this.
4166 * propagate its contribution. The key to this propagation is the invariant
4201 * runnable sum, runqueues can NOT do this.
4208 * Another reason this doesn't work is that runnable isn't a 0-sum entity.
4228 * XXX: only do this for the part of runnable > running ?
4534 * call update_tg_load_avg() when this function returns true.
4565 * cfs->util_sum. Although this is not a problem by itself, detaching
4601 * attach_entity_load_avg - attach this entity to its cfs_rq load avg
4605 * Must call update_cfs_rq_load_avg() before this, since we rely on
4656 * detach_entity_load_avg - detach this entity from its cfs_rq load avg
4660 * Must call update_cfs_rq_load_avg() before this, since we rely on
4879 * NOTE: this only works when value + margin < INT_MAX.
4940 * we cannot guarantee there is idle time on this CPU.
4950 * as ue.enqueued and by using this value to update the Exponential
5023 * | | | | | | | (util somewhere in this region)
5045 * beyond this performance level anyway.
5207 * will move 'time' backwards, this can screw around with the lag of
5221 * average and compensate for this, otherwise lag can quickly
5342 /* Entity has migrated, no longer consider this task hot */
5430 * except when: DEQUEUE_SAVE && !DEQUEUE_MOVE, in this case we'll be
5488 * Pick the next process, keeping these things in mind, in this order:
5649 /* note: this is a positive sum as runtime_remaining <= 0 */
5837 /* Avoid re-evaluating load for this entity: */
5860 /* At this point se is NULL and we are at root level */
5954 /* At this point se is NULL and we are at root level */
5985 * the CSD list. However, this RCU critical section annotates the
6073 /* By the above checks, this should never be true */
6117 * used to track this state.
6161 * unthrottle, this also covers the case in which the new bandwidth is
6173 /* a cfs_rq won't donate quota below this amount */
6338 * state (e.g. set_curr_task), in this case we're finished.
6469 * list, though this race is very rare. In order for this to occur, we
6472 * CSD item but the remote cpu has not yet processed it. To handle this,
6474 * guaranteed at this point that no additional cfs_rq of this group can
6833 /* At this point se is NULL and we are at root level */
6845 * A better way of solving this problem would be to wait for
6897 /* Avoid re-evaluating load for this entity: */
6900 * Bias pick_next to pick a task from this cfs_rq, as
6929 /* At this point se is NULL and we are at root level */
7328 * Scans the local SMT mask to see if the entire core is idle, and records this
7331 * Since SMT siblings share all cache levels, inspecting this limited remote
7357 * Scan the entire LLC domain for idle cores; this dynamically switches off if
7442 * Scan the LLC domain for idle CPUs; this is dynamically regulated by
7444 * average idle time for this rq (as found in rq->avg_idle).
7452 int this = smp_processor_id();
7487 time = cpu_clock(this);
7522 time = cpu_clock(this) - time;
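    [The three code lines above (7452, 7487, 7522) bracket the idle-CPU scan with cpu_clock() timestamps taken on the local CPU. A hedged kernel-style sketch of that pattern follows; the function name, the @scan_cost out-parameter and the break-on-first-idle policy are illustrative, not the in-tree select_idle_cpu().]

#include <linux/cpumask.h>
#include <linux/sched.h>
#include <linux/sched/clock.h>
#include <linux/smp.h>

/*
 * Scan @cpus for an idle CPU, starting near @target, and report how long
 * the scan took in nanoseconds. The caller must have preemption disabled
 * so smp_processor_id()/cpu_clock() refer to a stable local CPU.
 */
static int timed_idle_scan(const struct cpumask *cpus, int target,
			   u64 *scan_cost)
{
	int this = smp_processor_id();
	u64 time = cpu_clock(this);	/* ns timestamp on this CPU */
	int cpu, found = -1;

	for_each_cpu_wrap(cpu, cpus, target) {
		if (available_idle_cpu(cpu)) {
			found = cpu;
			break;
		}
	}

	/*
	 * Nanoseconds spent scanning; the real code relates this cost to the
	 * rq's average idle time (rq->avg_idle, see the lines above) to bound
	 * how deep future scans may go.
	 */
	*scan_cost = cpu_clock(this) - time;
	return found;
}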
7650 * essentially a sync wakeup. An obvious example of this
7735 * of such a task would be significantly decayed at this point in time.
7750 * though since this is useful for predicting the CPU capacity required
7861 * cpu_util for this case.
8030 * The rationale for this heuristic is as follows. In a performance domain,
8035 * only includes active power costs. With this model, if we assume that
8043 * ways to tell with the current Energy Model if this is actually a good
8046 * a good thing for latency, and this is consistent with the idea that most
8059 * other use-cases too. So, until someone finds a better way to solve this,
8140 * much capacity we can get out of the CPU; this is
8252 * that have the relevant SD flag set. In practice, this is SD_BALANCE_WAKE,
8301 * If both 'cpu' and 'prev_cpu' are part of this domain,
8512 * Note: this also catches the edge-case of curr being in a throttled
8837 * To achieve this balance we define a measure of imbalance which follows
8843 * function space it is obvious this converges, in the discrete case we get
8872 * this makes (5) the runtime complexity of the balancer.
8923 * [XXX write more on how we solve this.. _after_ merging pjt's patches that
8924 * rewrite all of this once again.]
9017 * Is this task likely cache-hot:
9133 * 2) cannot be migrated to this CPU due to cpus_ptr, or
9152 * Remember if this task can be migrated to any other CPU in
9258 * Right now, this is only the second place where
9420 * Right now, this is one of only two places we collect this stat
9711 struct sched_group *busiest; /* Busiest group in this sd */
9712 struct sched_group *local; /* Local group in this sd */
9891 * two tasks in the second group. Clearly this is undesired as it will overload
9894 * The current solution to this issue is detecting the skew in the first group
9898 * When this is detected, this group becomes a candidate for busiest; see
10136 * @sgs: variable to hold the statistics for this group.
10214 /* Check if dst CPU is idle and preferred to this group */
10471 * @sgs: variable to hold the statistics for this group.
10607 /* Skip over this group if it has no CPUs allowed */
10617 /* Skip over this group if no cookie matched */
10773 * So the write of this hint only occurs during periodic load
10791 * The reason to choose 85% as the threshold is that this is the
10837 * @sds: variable to hold the statistics for this sched_domain.
10994 * waiting task in this overloaded busiest group. Let's
11130 * this level.
11188 * Don't pull any tasks if this group is already above the
11215 * result the local one too) but this CPU is already
11231 * and there is no imbalance between this and busiest
11235 * on another group. Of course this applies only if
11291 * If we cannot move enough load due to this classification
11552 /* Are we the first CPU of this group? */
11612 /* Clear this flag as soon as we find a pullable task */
11668 * load to given_cpu. In rare situations, this may cause
11704 /* All tasks on this runqueue were pinned by CPU affinity */
11800 * We reach balance because all tasks are pinned at this level so
11813 * repeatedly reach this code, which would lead to balance_interval
12061 * Decay the newidle max times here because this is a regular
12068 * Stop the load balance at this level. There is another
12138 * - HK_TYPE_MISC CPUs are used for this task, because HK_TYPE_SCHED is not set
12312 * like this LLC domain has tasks we could move.
12386 /* If this CPU is going down, then nothing needs to be done: */
12475 * We assume there will be no idle load after this update and clear
12510 * If this CPU gets work to do, stop the load balancing
12737 * now runnable tasks on this rq.
12789 * is a possibility this nohz kicked cpu could be isolated. Hence
12796 * If this CPU has a pending nohz_balance_kick, then do the
12959 * sched_slice() considers only this active rq and it gets the
12962 * tasks on this CPU and the forced idle CPU. Ideally, we should
13129 * Reschedule if we are currently running on this runqueue and
13131 * this runqueue and our priority is higher than the current's