Searched refs:runnable (Results 1 - 2 of 2) sorted by relevance

/device/soc/rockchip/common/sdk_linux/kernel/sched/
pelt.c
142 static __always_inline u32 accumulate_sum(u64 delta, struct sched_avg *sa, unsigned long load, unsigned long runnable, in accumulate_sum() argument
168 * runnable = running = 0; in accumulate_sum()
182 if (runnable) { in accumulate_sum()
183 sa->runnable_sum += runnable * contrib << SCHED_CAPACITY_SHIFT; in accumulate_sum()
193 * We can represent the historical contribution to runnable average as the
194 * coefficients of a geometric series. To do this we sub-divide our runnable
202 * Let u_i denote the fraction of p_i that the entity was runnable.
220 static __always_inline int ___update_load_sum(u64 now, struct sched_avg *sa, unsigned long load, unsigned long runnable, in ___update_load_sum() argument
247 * running is a subset of runnable (weight) so running can't be set if in ___update_load_sum()
248 * runnable is clear. in ___update_load_sum()
[all...]
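
The pelt.c hits centre on the kernel's Per-Entity Load Tracking (PELT): accumulate_sum() folds each elapsed ~1 ms period into sa->runnable_sum, with past periods weighted as a geometric series in y, where y^32 = 1/2 (the comment block at line 193 describes exactly this). Below is a minimal userspace sketch of that decay under a floating-point model; the names Y, decay_sum, and accumulate_period are illustrative, not kernel identifiers, and the kernel's fixed-point arithmetic is not reproduced.

#include <stdio.h>

/*
 * Toy model of PELT's geometric decay: a full period's contribution is
 * weighted by y^n after n further periods, with y chosen so that
 * y^32 == 0.5 (the signal halves every 32 periods, ~32 ms).
 */
static const double Y = 0.9785720621; /* 0.5 ^ (1.0 / 32) */

/* Decay a running sum across n elapsed periods: sum *= y^n. */
static double decay_sum(double sum, unsigned int n)
{
    while (n--)
        sum *= Y;
    return sum;
}

/*
 * Fold one elapsed period into the tracked sum, mirroring the shape of
 * accumulate_sum(): old history decays, and the new period contributes
 * at full weight only if the entity was runnable during it.
 */
static double accumulate_period(double sum, int was_runnable)
{
    sum = decay_sum(sum, 1);
    if (was_runnable)
        sum += 1024.0; /* one fully runnable period's contribution */
    return sum;
}

int main(void)
{
    double sum = 0.0;
    /* 64 consecutive runnable periods: converges toward 1024 / (1 - y). */
    for (int i = 0; i < 64; i++)
        sum = accumulate_period(sum, 1);
    printf("runnable_sum after 64 busy periods: %.0f (ceiling ~%.0f)\n",
           sum, 1024.0 / (1.0 - Y));
    return 0;
}

Because y^32 = 1/2, history from 32 periods ago carries half the weight of the current period, which is why the comment at lines 193-202 can describe the runnable average as the coefficients of a geometric series over the fractions u_i.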
fair.c
790 /* Give new sched_entity start runnable values to heavy its load in infant time */
1569 unsigned long runnable; member
1623 ((ns->compute_capacity * imbalance_pct) < (ns->runnable * FAIR_ONEHUNDRED)))) { in numa_classify()
1629 ((ns->compute_capacity * imbalance_pct) > (ns->runnable * FAIR_ONEHUNDRED)))) { in numa_classify()
1681 ns->runnable += cpu_runnable(rq); in update_numa_stats()
3345 * on an 8-core system with 8 tasks each runnable on one CPU shares has in calc_group_shares()
3347 * case no task is runnable on a CPU MIN_SHARES=2 should be returned in calc_group_shares()
3502 * _IFF_ we look at the pure running and runnable sums. Because they
3506 * and simply copies the running/runnable sum over (but still wrong, because
3513 * And since, like util, the runnable par
6045 unsigned int runnable; cpu_runnable_without() local
[all...]
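
The fair.c hits consume the same runnable signal in aggregate: update_numa_stats() sums cpu_runnable(rq) across a node, and numa_classify() compares that total against compute_capacity scaled by imbalance_pct (FAIR_ONEHUNDRED is this tree's name for the percentage base, 100). Here is a condensed sketch of that classification, assuming simplified struct and enum definitions of my own; the real numa_classify() also weighs nr_running against the node's CPU count.

#include <stdio.h>

#define PCT_BASE 100 /* stands in for FAIR_ONEHUNDRED in the snippets */

/* Condensed per-node stats, modelled on the fields the hits show. */
struct numa_stats {
    unsigned long compute_capacity; /* summed capacity of the node's CPUs */
    unsigned long runnable;         /* sum of cpu_runnable(rq) over the node */
};

enum node_type { NODE_HAS_SPARE, NODE_FULLY_BUSY, NODE_OVERLOADED };

/*
 * Simplified take on numa_classify(): with imbalance_pct above 100, a
 * node counts as overloaded once runnable load exceeds capacity by the
 * imbalance margin, and as having spare room when it sits below it.
 */
static enum node_type classify_node(const struct numa_stats *ns,
                                    unsigned int imbalance_pct)
{
    if (ns->compute_capacity * imbalance_pct < ns->runnable * PCT_BASE)
        return NODE_OVERLOADED;
    if (ns->compute_capacity * imbalance_pct > ns->runnable * PCT_BASE)
        return NODE_HAS_SPARE;
    return NODE_FULLY_BUSY;
}

int main(void)
{
    struct numa_stats node = { .compute_capacity = 4096, .runnable = 6000 };
    /* 4096 * 125 = 512000 < 6000 * 100 = 600000 -> overloaded */
    printf("node type: %d\n", classify_node(&node, 125));
    return 0;
}

Used with imbalance_pct = 125, a node with compute_capacity = 4096 classifies as overloaded once its summed runnable load passes 5120, i.e. the balancer tolerates 25% overshoot before treating the node as a migration source.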
