Lines Matching refs:enqueued

354 * enqueued) or force our parent to appear after us when it is
355 * enqueued. The fact that we always enqueue bottom-up
437 /* Do the two (enqueued) entities belong to the same group ? */
1053 * Task is being enqueued - update stats:
2844 * without p->mm even though we still had it when we enqueued this
4046 return max(ue.ewma, (ue.enqueued & ~UTIL_AVG_UNCHANGED));
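The hit at line 4046 computes a task's estimated utilization as the maximum of its EWMA and its last enqueued snapshot, with the UTIL_AVG_UNCHANGED flag bit masked off. A minimal user-space sketch of that mask-and-max pattern (the flag value and function name here are illustrative assumptions, not the kernel's definitions):

```c
/* Assumption for illustration: the flag lives in the low bit of
 * the .enqueued field; the kernel defines the real value. */
#define UTIL_AVG_UNCHANGED 0x1U

/* Sketch of the max(ewma, enqueued & ~flag) pattern at line 4046. */
static unsigned int task_util_est_sketch(unsigned int ewma,
                                         unsigned int enqueued)
{
        unsigned int snap = enqueued & ~UTIL_AVG_UNCHANGED;

        return ewma > snap ? ewma : snap;
}
```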
4081 unsigned int enqueued;
4088 enqueued = cfs_rq->avg.util_est.enqueued;
4089 enqueued += _task_util_est(p);
4090 WRITE_ONCE(cfs_rq->avg.util_est.enqueued, enqueued);
4097 unsigned int enqueued;
4104 enqueued = cfs_rq->avg.util_est.enqueued;
4105 enqueued -= min_t(unsigned int, enqueued, _task_util_est(p));
4106 WRITE_ONCE(cfs_rq->avg.util_est.enqueued, enqueued);
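The paired fragments at lines 4081-4106 add a task's estimated utilization to cfs_rq->avg.util_est.enqueued on enqueue and subtract it on dequeue, clamping at zero so a stale estimate cannot underflow the unsigned sum. A hedged user-space sketch of that accumulate/clamped-subtract pattern (a plain global stands in for the cfs_rq field; names are illustrative):

```c
/* Stand-in for cfs_rq->avg.util_est.enqueued (illustration only). */
static unsigned int util_est_enqueued;

/* On enqueue: accumulate the task's estimated utilization. */
static void util_est_enqueue(unsigned int task_util_est)
{
        util_est_enqueued += task_util_est;
}

/* On dequeue: subtract, but never below zero -- this mirrors the
 * min_t() guard at line 4105. */
static void util_est_dequeue(unsigned int task_util_est)
{
        unsigned int sub = task_util_est < util_est_enqueued ?
                           task_util_est : util_est_enqueued;

        util_est_enqueued -= sub;
}
```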
4148 if (ue.enqueued & UTIL_AVG_UNCHANGED) {
4152 last_enqueued_diff = ue.enqueued;
4158 ue.enqueued = task_util(p);
4160 if (ue.ewma < ue.enqueued) {
4161 ue.ewma = ue.enqueued;
4170 last_ewma_diff = ue.enqueued - ue.ewma;
4171 last_enqueued_diff -= ue.enqueued;
4193 * as ue.enqueued and by using this value to update the Exponential
4209 ue.enqueued |= UTIL_AVG_UNCHANGED;
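The fragment at lines 4148-4209 is the EWMA update: when the new enqueued sample exceeds the average, the average ramps up to it immediately (lines 4160-4161); otherwise it decays toward the sample geometrically. A sketch of that update rule, assuming the usual 1/4 weight (a shift of 2); the function and macro names are illustrative:

```c
/* Assumed weight: the new sample contributes 1/4 per update; the
 * actual kernel shift value is an assumption here. */
#define EWMA_WEIGHT_SHIFT 2

static unsigned int util_est_ewma_sketch(unsigned int ewma,
                                         unsigned int enqueued)
{
        long diff;

        /* Fast ramp-up, as at lines 4160-4161. */
        if (ewma < enqueued)
                return enqueued;

        /* Otherwise decay: ewma += (enqueued - ewma) / 4, done in
         * the shifted form the fragment above uses. */
        diff = (long)enqueued - (long)ewma;
        ewma <<= EWMA_WEIGHT_SHIFT;
        ewma += diff;
        ewma >>= EWMA_WEIGHT_SHIFT;

        return ewma;
}
```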
4634 * Any task has to be enqueued before it get to execute on
5757 /* Runqueue only has SCHED_IDLE tasks enqueued */
6010 * The load of a CPU is defined by the load of tasks currently enqueued on that
6728 util = max(util, READ_ONCE(cfs_rq->avg.util_est.enqueued));
6740 * enqueued on that CPU as well as tasks which are currently sleeping after an
6809 unsigned int estimated = READ_ONCE(cfs_rq->avg.util_est.enqueued);
6850 * Predicts what cpu_util(@cpu) would return if @p was migrated (and enqueued)
6871 util_est = READ_ONCE(cfs_rq->avg.util_est.enqueued);
6874 * During wake-up, the task isn't enqueued yet and doesn't
6875 * appear in the cfs_rq->avg.util_est.enqueued of any rq,
6877 * cpu_util() after the task has been enqueued.
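The hits at lines 6728-6877 combine the two signals: a CPU's effective utilization is the maximum of its PELT running average and the util_est.enqueued sum, and during wake-up the waking task's estimate must be added separately because it is not yet enqueued on any rq. A trivial sketch of that max-of-two-signals combination (names are illustrative, not cpu_util()'s real signature):

```c
/* Sketch of the max() at line 6728: take whichever of the PELT
 * average and the enqueued-estimate sum is larger. */
static unsigned int cpu_util_sketch(unsigned int util_avg,
                                    unsigned int util_est_enqueued)
{
        return util_avg > util_est_enqueued ?
               util_avg : util_est_enqueued;
}
```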
11352 * have been enqueued in the meantime. Since we're not going idle,