Lines Matching refs:runnable
790 /* Give a new sched_entity starting runnable values that weight its load heavily during its infancy */
1569 unsigned long runnable;
1623 ((ns->compute_capacity * imbalance_pct) < (ns->runnable * FAIR_ONEHUNDRED)))) {
1629 ((ns->compute_capacity * imbalance_pct) > (ns->runnable * FAIR_ONEHUNDRED)))) {
1681 ns->runnable += cpu_runnable(rq);
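The comparisons quoted above classify a NUMA node by weighing its accumulated runnable load (inflated by the imbalance percentage) against its compute capacity. A minimal compilable sketch of that check, assuming a stand-in `node_stats` struct and using the literal `100` in place of the listing's `FAIR_ONEHUNDRED` constant:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for the NUMA stats fields referenced above. */
struct node_stats {
    unsigned long compute_capacity; /* aggregate capacity of the node's CPUs */
    unsigned long runnable;         /* sum of cpu_runnable() over the node   */
};

/*
 * A node counts as overloaded when its runnable load, scaled by 100,
 * exceeds its capacity scaled by the imbalance percentage:
 *     capacity * imbalance_pct < runnable * 100
 */
static bool node_overloaded(const struct node_stats *ns, unsigned int imbalance_pct)
{
    return ns->compute_capacity * imbalance_pct < ns->runnable * 100;
}

/* The mirror-image test: capacity comfortably exceeds runnable load. */
static bool node_has_spare(const struct node_stats *ns, unsigned int imbalance_pct)
{
    return ns->compute_capacity * imbalance_pct > ns->runnable * 100;
}
```

With `imbalance_pct = 125`, a node of capacity 1024 tips into "overloaded" once its summed runnable load exceeds 1280, giving the hysteresis band the two opposite comparisons create.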
3345 * on an 8-core system with 8 tasks each runnable on one CPU shares has
3347 * case no task is runnable on a CPU MIN_SHARES=2 should be returned
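The two comment fragments above describe the group-shares calculation: on an 8-CPU system with one task per CPU each CPU gets an equal slice of the group's shares, and a CPU with no runnable task still gets `MIN_SHARES = 2`. A simplified sketch of that clamped proportional split (the real `calc_group_shares()` works on load averages, not raw task counts):

```c
#include <assert.h>

#define MIN_SHARES 2UL

/*
 * Distribute a task group's shares in proportion to one CPU's share of
 * the group load, clamped so an idle CPU still reports MIN_SHARES and
 * a fully loaded CPU never exceeds the group's total shares.
 */
static unsigned long group_shares(unsigned long tg_shares,
                                  unsigned long cpu_load,
                                  unsigned long tg_load)
{
    unsigned long shares;

    if (!tg_load)
        return tg_shares;

    shares = tg_shares * cpu_load / tg_load;
    if (shares < MIN_SHARES)
        shares = MIN_SHARES;
    if (shares > tg_shares)
        shares = tg_shares;
    return shares;
}
```

For the 8-task/8-CPU case each CPU ends up with 1024/8 = 128 shares; a CPU contributing zero load is clamped up to 2.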
3502 * _IFF_ we look at the pure running and runnable sums. Because they
3506 * and simply copies the running/runnable sum over (but still wrong, because
3513 * And since, like util, the runnable part should be directly transferable,
3532 * runnable sum, runqueues can NOT do this.
3539 * Another reason this doesn't work is that runnable isn't a 0-sum entity.
3540 * Imagine a rq with 2 tasks that each are runnable 2/3 of the time. Then the
3541 * rq itself is runnable anywhere between 2/3 and 1 depending on how the
3542 * runnable section of these tasks overlap (or not). If they were to perfectly
3543 * align the rq as a whole would be runnable 2/3 of the time. If however we
3544 * always have at least 1 runnable task, the rq as a whole is always runnable.
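The 2/3-runnable example above can be stated as a pair of bounds: with perfect overlap the runqueue is runnable only as much as its busiest task, and with no overlap the per-task fractions add up, saturating at 1 (some task is always runnable). A small sketch of those bounds for two tasks:

```c
#include <assert.h>

/* Lower bound: the tasks' runnable sections align perfectly, so the
 * rq is runnable exactly as much as the busier task. */
static double rq_runnable_min(double a, double b)
{
    return a > b ? a : b;
}

/* Upper bound: the runnable sections never overlap, so the fractions
 * add, clipped at 1.0 (the rq cannot be more than always runnable). */
static double rq_runnable_max(double a, double b)
{
    double sum = a + b;
    return sum > 1.0 ? 1.0 : sum;
}
```

For two tasks each runnable 2/3 of the time this yields the range [2/3, 1] from the comment: anywhere from "runnable 2/3 of the time" up to "always runnable", depending on alignment. This is exactly why runnable is not a 0-sum quantity that runqueues could redistribute exactly.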
3552 * We can construct a rule that adds runnable to a rq by assuming minimal
3555 * On removal, we'll assume each task is equally runnable; which yields:
3559 * XXX: only do this for the part of runnable > running ?
3604 /* Set new sched_entity's runnable */
3608 /* Update parent cfs_rq runnable */
3634 * Add runnable; clip at LOAD_AVG_MAX. Reflects that until
3635 * the CPU is saturated running == runnable.
3642 * assuming all tasks are equally runnable.
3648 /* But make sure to not inflate se's runnable */
3654 * Rescale running sum to be in the same range as runnable sum
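The fragments above describe the minimal-overlap rule for propagating a child's runnable sum upward: add the contributions and clip at `LOAD_AVG_MAX`, reflecting that until the CPU saturates, running equals runnable. A sketch of that saturating add, using the kernel's `LOAD_AVG_MAX` value (the maximum a PELT geometric series can reach):

```c
#include <assert.h>

#define LOAD_AVG_MAX 47742UL /* maximum possible PELT sum */

/*
 * Add a child entity's runnable sum into the parent's, assuming
 * minimal overlap between runnable sections (so the sums add), but
 * never exceeding the maximum a single PELT series can represent.
 */
static unsigned long add_runnable_clipped(unsigned long parent_runnable_sum,
                                          unsigned long se_runnable_sum)
{
    unsigned long sum = parent_runnable_sum + se_runnable_sum;

    return sum > LOAD_AVG_MAX ? LOAD_AVG_MAX : sum;
}
```

Below saturation the sums simply add; once the combined contribution would exceed what a fully-runnable runqueue can accumulate, the clip kicks in, which is the "until the CPU is saturated running == runnable" observation in code form.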
3763 * The cfs_rq avg is the direct sum of all its entities (blocked and runnable)
4756 * Ensure that runnable average is periodically updated.
5012 /* freeze hierarchy runnable averages while throttled */
6045 unsigned int runnable;
6053 runnable = READ_ONCE(cfs_rq->avg.runnable_avg);
6055 /* Discount task's runnable from CPU's runnable */
6056 lsub_positive(&runnable, p->se.avg.runnable_avg);
6058 return runnable;
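The lines above discount a departing task's own contribution from the CPU's runnable average, using `lsub_positive()` so the subtraction can never underflow below zero. A self-contained sketch of that logic (the real code reads `cfs_rq->avg.runnable_avg` and `p->se.avg.runnable_avg`; the helper here mirrors the kernel's `lsub_positive()` macro as a function):

```c
#include <assert.h>

/* Subtract val from *res, flooring the result at zero rather than
 * letting the unsigned value wrap around. */
static void lsub_positive(unsigned int *res, unsigned int val)
{
    *res = *res > val ? *res - val : 0;
}

/* Return the CPU's runnable average with one task's contribution
 * removed, as when estimating load without a migrating task. */
static unsigned int cpu_runnable_without(unsigned int cpu_runnable_avg,
                                         unsigned int task_runnable_avg)
{
    unsigned int runnable = cpu_runnable_avg;

    lsub_positive(&runnable, task_runnable_avg);
    return runnable;
}
```

The floor matters because the averages decay on independent schedules: the task's tracked contribution can momentarily exceed the CPU's, and a plain subtraction on an unsigned type would wrap to a huge bogus load.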
6681 * cfs_rq.avg.util_avg is the sum of running time of runnable tasks plus the
6682 * recent utilization of currently non-runnable tasks on a CPU. It represents
6802 * runnable on that CPU.
7489 * have to consider cfs_rq->curr. If it is still a runnable
7672 /* throttled hierarchies are not runnable */
7702 * Where w_i,j is the weight of the j-th runnable task on CPU i. This weight
7795 * w_i,j,k is the weight of the j-th runnable task in the k-th cgroup on CPU i.
8589 unsigned long group_runnable; /* Total runnable time over the CPUs of the group */
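The `group_runnable` field above accumulates runnable time across every CPU in a scheduling group during load-balance statistics gathering. A minimal sketch of that aggregation, with a plain array standing in for iterating the group's runqueues via `cpu_runnable()`:

```c
#include <assert.h>

/*
 * Sum per-CPU runnable load into a group total, as the load balancer
 * does when filling in group statistics (sketch; the real code walks
 * the group's CPU mask and calls cpu_runnable() on each rq).
 */
static unsigned long group_runnable_sum(const unsigned long *cpu_runnable,
                                        int nr_cpus)
{
    unsigned long total = 0;

    for (int i = 0; i < nr_cpus; i++)
        total += cpu_runnable[i];
    return total;
}
```

The resulting total feeds the same style of capacity-versus-runnable comparison shown earlier in this listing for NUMA nodes, here applied at the scheduling-group level.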
9768 * Also calculates the amount of runnable load which should be moved
11335 * now runnable tasks on this rq.