Lines Matching defs:time

67  * and have no persistent notion like in traditional, time-slice
160 * each time a cfs_rq requests quota.
164 * we will only issue the remaining available time.
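
    Lines 160-164 describe the CFS bandwidth slice: each time a cfs_rq exhausts
    its local pool it requests one slice from the global (tg) pool, clamped to
    whatever quota remains. A minimal standalone sketch of that clamping rule
    (the function and parameter names here are mine, not the kernel's):

        #include <stdint.h>

        /* Hypothetical reduction of the slice grant: hand out one slice
         * per request, but never more than the global pool still holds. */
        static uint64_t grant_runtime(uint64_t slice_ns, uint64_t pool_ns)
        {
                return slice_ns < pool_ns ? slice_ns : pool_ns;
        }
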
601 /* ensure we never gain time by being placed backwards. */
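
    The clamp behind line 601 (and its twin at line 4360) depends on a
    wraparound-safe max; a sketch of the helper as I recall it from fair.c,
    an approximation rather than a verbatim copy:

        #include <stdint.h>

        /* Comparing the difference as signed keeps the max correct
         * even if the u64 vruntime counters wrap around. */
        static inline uint64_t max_vruntime(uint64_t max_vruntime, uint64_t vruntime)
        {
                int64_t delta = (int64_t)(vruntime - max_vruntime);

                if (delta > 0)
                        max_vruntime = vruntime;

                return max_vruntime;
        }
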
735 * We calculate the wall-time slice from the period by taking a part
790 /* Give a new sched_entity starting runnable values that weight its load heavily in its infant time */
967 * Preserve migrating task's wait time so wait_start
968 * time stamp can be adjusted to accumulate wait time
1040 * Blocking time is in units of nanosecs, so shift by
1042 * amount of time that the task spent sleeping:
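
    If lines 1040-1042 are the usual sleep-profiling comment, the line grep
    skipped gives the shift as 20. The arithmetic: 2^20 ns = 1,048,576 ns,
    roughly 1.05 ms, so shifting a nanosecond delta right by 20 yields a
    milliseconds-range estimate without a 64-bit division.
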
1117 * Approximate time to scan a full NUMA task in ms. The task scan period is
1504 * page (n_t) (in a given time-span) to a probability.
2321 * Get the fraction of time the task has been running since the last
2330 /* Use the start of this time slice to avoid calculations. */
2338 /* Avoid time going backwards, prevent potential divide error: */
2875 * the next time around.
2961 * Make sure tasks use at least 32x as much time to run other code
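
    As I recall, the continuation of line 2961 in fair.c bounds NUMA PTE
    scanning overhead at about 3%, which matches the factor: running 32 units
    of other code per unit of scanning caps the scan share at
    1/(1+32) = 1/33, roughly 3%.
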
3208 /* commit outstanding execution time */
3277 * one task. It takes time for our CPU's grq->avg.load_avg to build up,
3462 * We are supposed to update the task to "current" time, then it's up to
3464 * getting what current time is, so simply throw away the out-of-date
3465 * time. This will result in the wakee task being less decayed, but giving
3540 * Imagine a rq with 2 tasks that each are runnable 2/3 of the time. Then the
3543 * aligned, the rq as a whole would be runnable 2/3 of the time. If however we
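
    A quick check of the numbers at lines 3540-3543: two tasks each runnable
    2/3 of the time must overlap by at least 1/3, since 2/3 + 2/3 = 4/3 > 1.
    With full overlap the rq is runnable 2/3 of the time; with minimal overlap
    it is runnable all of it, so the rq's runnable fraction lies anywhere
    between 2/3 and 1 depending on alignment.
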
3736 * waste of time to try to decay it:
3760 * @now: current time, as per cfs_rq_clock_pelt()
4144 * If the PELT values haven't changed since enqueue time,
4182 * we cannot guarantee there is idle time on this CPU.
4350 * Halve their sleep time's effect, to allow
4360 /* ensure we never gain time by being placed backwards. */
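
    Lines 4350 and 4360 bracket the sleeper placement in place_entity(); a
    condensed, version-dependent sketch of how I read that flow (kernel-context
    fragment, identifiers as in fair.c):

        u64 vruntime = cfs_rq->min_vruntime;

        if (!initial) {
                unsigned long thresh = sysctl_sched_latency;

                /* Halve the sleep credit, per the comment at 4350. */
                if (sched_feat(GENTLE_FAIR_SLEEPERS))
                        thresh >>= 1;
                vruntime -= thresh;
        }

        /* ensure we never gain time by being placed backwards. */
        se->vruntime = max_vruntime(se->vruntime, vruntime);
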
4433 * moment in time, instead of some random moment in the past. Being
4536 * Update run-time statistics of the 'current'.
4635 * a CPU. So account for the time it spent waiting on the
4751 * Update run-time statistics of the 'current'.
4972 /* group is entering throttled state, stop time */
5260 /* minimum remaining period time to redistribute slack quota */
6064 * Only decay a single time; tasks that have less than 1 wakeup per
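
    Line 6064 appears to come from record_wakee(); a sketch of the decay it
    guards, assuming the wakee_flips and wakee_flip_decay_ts fields from
    fair.c (kernel-context fragment):

        /* Halve the flip count at most once per second, so tasks that
         * wake others less than once a jiffy are not decayed repeatedly. */
        if (time_after(jiffies, current->wakee_flip_decay_ts + HZ)) {
                current->wakee_flips >>= 1;
                current->wakee_flip_decay_ts = jiffies;
        }
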
6485 * average idle time for this rq (as found in rq->avg_idle).
6492 u64 time;
6521 time = cpu_clock(this);
6538 time = cpu_clock(this) - time;
6539 update_avg(&this_sd->avg_scan_cost, time);
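
    Lines 6492-6539 time the idle-CPU scan: cpu_clock() is read before and
    after, and the elapsed nanoseconds feed this_sd->avg_scan_cost. The
    update_avg() helper is a 1/8-weight exponential moving average; a
    standalone sketch of the pattern:

        #include <stdint.h>

        /* EWMA with weight 1/8: avg += (sample - avg) / 8 */
        static void update_avg(uint64_t *avg, uint64_t sample)
        {
                int64_t diff = (int64_t)(sample - *avg);

                *avg += diff / 8;
        }

    Keeping this average cheap matters because the scheduler compares it
    against the rq's average idle time (line 6485) to decide how wide a scan
    it can afford.
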
6681 * cfs_rq.avg.util_avg is the sum of running time of runnable tasks plus the
6688 * the running time on this CPU scaled by capacity_curr.
6694 * has just got a big task running after a long sleep period. At the same time
6701 * the average stabilizes with the new running time. We need to check that the
6930 * Busy time computation: utilization clamping is not
7189 * cfs_rq_of(p) references at time of call are still valid and identify the
7229 * We are supposed to update the task to "current" time, then
7231 * have difficulty in getting what current time is, so simply
7232 * throw away the out-of-date time. This will result in the
7285 * Since curr is running now, convert the gran from real-time
7286 * to virtual-time in its units.
7654 * Update run-time statistics of the 'current'.
7693 * time to each task. This is expressed in the following equation:
7711 * fraction of 'recent' time available for SCHED_OTHER task execution. But it
7778 * time.
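
    Line 7693 introduces the load-balance fairness condition; as I recall the
    same comment block in fair.c, it reads W_i,n / P_i == W_j,n / P_j for all
    CPUs i, j, where W_i,n is the n-th weight average on CPU i and P_i is that
    CPU's compute capacity, i.e. the fraction of 'recent' time line 7711
    refers to.
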
8589 unsigned long group_runnable; /* Total runnable time over the CPUs of the group */
9732 * below the average load. At the same time, we also don't want to
10258 * _independently_ and at the _same_ time to move some load to
10411 * skyrocketing in a short amount of time. Skip the balance_interval
10639 /* Earliest time when we have to do rebalance again */
10769 * Kick a CPU to do the nohz balancing, if it is time for it. We pick any
11050 * Each time a cpu enters idle, we assume that it has blocked load and
11065 /* Earliest time when we have to do rebalance again */
11080 * the has_blocked flag. If a cpu enters idle in the meantime, it will
11121 * If time for next balance is due,
11210 /* Will wake up very soon. No time for doing anything else */
11272 * measure the duration of idle_balance() as idle time.
11414 * Trigger the SCHED_SOFTIRQ if it is time to do periodic load balancing.
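
    The function behind line 11414 reduces to a jiffies comparison; a
    simplified sketch of the trigger path (kernel-context fragment, omitting
    the NULL-domain check):

        /* From the scheduler tick: raise the balance softirq once
         * rq->next_balance (lines 10639 and 11065) is due. */
        if (time_after_eq(jiffies, rq->next_balance))
                raise_softirq(SCHED_SOFTIRQ);

    nohz_balancer_kick() then handles the idle-CPU side described at
    line 10769.
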