Lines Matching defs:this
65 * NOTE: this latency value is not the same as the concept of
314 /* runqueue on which this entity is (to be) queued */
320 /* runqueue "owned" by this group */
356 * reduces this to two cases and a special case for the root
365 * the list, this means to put the child at the tail
415 * we finally want to del. In this case, tmp_alone_branch moves
505 /* runqueue "owned" by this group */
721 * this period because otherwise the slices get too small.
807 /* when this task is enqueued, it will contribute to its cfs_rq's load_avg */
822 * To solve this problem, we also cap the util_avg of successive tasks to
1503 * a task's usage of a particular page (n_p) per total usage of this
1506 * Our periodic faults will sample this probability and getting the
1781 /* Would this change make things worse? */
1837 /* Skip this swap candidate if it cannot move to the source cpu. */
1843 * Skip this swap candidate if it is not moving to its preferred
1891 * 1998 (see SMALLIMP and task_weight for why) but in this
2029 /* Skip this CPU if the source task cannot migrate */
2138 * this node as the task's preferred numa node, so the workload can
2205 * Find out how many nodes the workload is actively running on. Do this by
2330 /* Use the start of this time slice to avoid calculations. */
2405 /* Are there nodes at this distance from each other? */
2813 * much of an issue though, since this is just used for
2842 * NOTE: make sure not to dereference p->mm before this check,
2844 * without p->mm even though we still had it when we enqueued this
2874 * Delay this task enough that another task of this mm will likely win
2882 virtpages = pages * 0x8; /* Scan up to this much virtual space */
3246 * All this does is approximate the hierarchical proportion which includes that
3257 * there, done that) we approximate it with this average stuff. The average
3275 * to be exactly that of course -- this leads to transients in boundary
3397 * There are a few boundary cases this might miss but it should
3497 * propagate its contribution. The key to this propagation is the invariant
3532 * runnable sum, runqueues can NOT do this.
3539 * Another reason this doesn't work is that runnable isn't a 0-sum entity.
3559 * XXX: only do this for the part of runnable > running ?
3772 * call update_tg_load_avg() when this function returns true.
3800 * cfs->util_sum. Although this is not a problem by itself, detaching
3835 * attach_entity_load_avg - attach this entity to its cfs_rq load avg
3839 * Must call update_cfs_rq_load_avg() before this, since we rely on
3890 * detach_entity_load_avg - detach this entity from its cfs_rq load avg
3894 * Must call update_cfs_rq_load_avg() before this, since we rely on
4119 * NOTE: this only works when value + margin < INT_MAX.
4182 * we cannot guarantee there is idle time on this CPU.
4193 * as ue.enqueued and by using this value to update the Exponential
4399 * this way the vruntime transition between RQs is done when both
4412 * this way we don't have the most up-to-date min_vruntime on the originating
4434 * placed in the past could significantly boost this task to the
4578 * except when: DEQUEUE_SAVE && !DEQUEUE_MOVE, in this case we'll be
4662 * Pick the next process, keeping these things in mind, in this order:
4706 * Someone really wants this to run. If it's not unfair, run it.
4860 /* note: this is a positive sum as runtime_remaining <= 0 */
5125 /* At this point se is NULL and we are at root level */
5166 /* By the above check, this should never be true */
5199 * used to track this state.
5246 * unthrottle, this also covers the case in which the new bandwidth is
5258 /* a cfs_rq won't donate quota below this amount */
5441 * state (e.g. set_curr_task), in this case we're finished.
5860 /* At this point se is NULL and we are at root level */
5872 * A better way of solving this problem would be to wait for
5937 /* Avoid re-evaluating load for this entity: */
5940 * Bias pick_next to pick a task from this cfs_rq, as
5968 /* At this point se is NULL and we are at root level */
6360 * Scans the local SMT mask to see if the entire core is idle, and records this
6363 * Since SMT siblings share all cache levels, inspecting this limited remote
6393 * Scan the entire LLC domain for idle cores; this dynamically switches off if
6483 * Scan the LLC domain for idle CPUs; this is dynamically regulated by
6485 * average idle time for this rq (as found in rq->avg_idle).
6493 int this = smp_processor_id();
6521 time = cpu_clock(this);
6538 time = cpu_clock(this) - time;
6688 * the running time on this CPU scaled by capacity_curr.
6757 * cpu_util for this case.
6785 * a) if *p is the only task sleeping on this CPU, then:
6790 * b) if other tasks are SLEEPING on this CPU, which is now exiting
6958 * The rationale for this heuristic is as follows. In a performance domain,
6963 * only includes active power costs. With this model, if we assume that
6971 * ways to tell with the current Energy Model if this is actually a good
6974 * a good thing for latency, and this is consistent with the idea that most
6987 * other use-cases too. So, until someone finds a better way to solve this,
7046 * much capacity we can get out of the CPU; this is
7071 /* Evaluate the energy impact of using this CPU. */
7106 * that have the 'sd_flag' flag set. In practice, this is SD_BALANCE_WAKE,
7149 * If both 'cpu' and 'prev_cpu' are part of this domain,
7196 * deal with this by subtracting the old and adding the new
7242 /* We have migrated, no longer consider this task hot */
7400 * Note: this also catches the edge-case of curr being in a throttled
7429 * triggering this preemption.
7714 * To achieve this balance we define a measure of imbalance which follows
7720 * function space it is obvious this converges, in the discrete case we get
7749 * this makes (5) the runtime complexity of the balancer.
7800 * [XXX write more on how we solve this.. _after_ merging pjt's patches that
7801 * rewrite all of this once again.]
7884 * Is this task likely cache-hot:
8001 * 2) cannot be migrated to this CPU due to cpus_ptr, or
8022 * Remember if this task can be migrated to any other CPU in
8124 * Right now, this is only the second place where
8284 * Right now, this is one of only two places we collect this stat
8608 struct sched_group *busiest; /* Busiest group in this sd */
8609 struct sched_group *local; /* Local group in this sd */
8785 * two tasks in the second group. Clearly this is undesired as it will overload
8788 * The current solution to this issue is detecting the skew in the first group
8792 * When this is detected, this group becomes a candidate for busiest; see
8934 * @sgs: variable to hold the statistics for this group.
9007 /* Check if dst CPU is idle and preferred to this group */
9239 * @sgs: variable to hold the statistics for this group.
9372 /* Skip over this group if it has no CPUs allowed */
9481 * Otherwise, keep the task on this node to stay close
9509 * @sds: variable to hold the statistics for this sched_domain.
9668 * waiting task in this overloaded busiest group. Let's
9784 * this level.
9848 * Don't pull any tasks if this group is already above the
9874 * result the local one too) but this CPU is already
9883 * and there is no imbalance between this and busiest
9887 * on another group. Of course this applies only if
9942 * If we cannot move enough load due to this classification
10143 /* Are we the first CPU of this group? */
10256 * load to given_cpu. In rare situations, this may cause
10292 /* All tasks on this runqueue were pinned by CPU affinity */
10397 * We reach balance because all tasks are pinned at this level so
10410 * repeatedly reach this code, which would lead to balance_interval
10649 * Decay the newidle max times here because this is a regular
10660 * Stop the load balance at this level. There is another
10721 * If this CPU has been elected to perform the nohz idle
10746 * - HK_FLAG_MISC CPUs are used for this task, because HK_FLAG_SCHED is not set
10919 * like this LLC domain has tasks we could move.
11079 * We assume there will be no idle load after this update and clear
11107 * If this CPU gets work to do, stop the load balancing
11335 * now runnable tasks on this rq.
11389 * is a possibility this nohz kicked cpu could be isolated. Hence
11397 * If this CPU has a pending nohz_balance_kick, then do the
11613 * Reschedule if we are currently running on this runqueue and
11615 * this runqueue and our priority is higher than the current's
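
Note on the entries at 6493, 6521 and 6538 above: they are the definition and uses of the local variable `this` in select_idle_cpu(), where cpu_clock() timestamps bracket the idle-CPU scan so the scan cost can be tracked. Below is a minimal user-space sketch of that timing pattern, assuming a hypothetical scan_for_idle_cpu() helper and clock_gettime() standing in for the kernel's cpu_clock(); it is an illustration, not the kernel implementation.

/*
 * Minimal user-space sketch (not kernel code): mirror the pattern
 * "time = cpu_clock(this); ...scan...; time = cpu_clock(this) - time;"
 * using clock_gettime(). scan_for_idle_cpu() is a hypothetical stand-in
 * for the real LLC scan.
 */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

static uint64_t now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

static int scan_for_idle_cpu(void)
{
	return -1;	/* placeholder: pretend no idle CPU was found */
}

int main(void)
{
	uint64_t time = now_ns();	/* mirrors: time = cpu_clock(this); */
	int cpu = scan_for_idle_cpu();

	time = now_ns() - time;		/* mirrors: time = cpu_clock(this) - time; */
	printf("scan cost: %llu ns, idle cpu: %d\n",
	       (unsigned long long)time, cpu);
	return 0;
}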