Lines Matching refs:value
95 * NOTE: this latency value is not the same as the concept of
130 * This value is kept at sysctl_sched_latency/sysctl_sched_min_granularity
270 * Increase the granularity value when there are more CPUs,
703 * As measured, the max (key * weight) value was ~44 bits for a kernel build.
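
    For scale, assuming these key * weight products feed the signed 64-bit
    accumulator used by the EEVDF avg_vruntime bookkeeping this comment sits
    in: at ~44 bits per term, 2^63 / 2^44 = 2^19 leaves headroom for roughly
    half a million terms before the sum could overflow.
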
1147 * value. Moreover, the sum of the util_avgs may be divergent, such
2294 * the value is, the more remote accesses that would be expected to
3629 * Explicitly do a load-store to ensure the intermediate value never hits
3649 * Explicitly do a load-store to ensure the intermediate value never hits
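
    Both hits carry the same comment, shared by clamp helpers such as the
    kernel's sub_positive(): compute the new value in a local, then publish
    it with a single store, so lockless readers never observe an underflowed
    intermediate. A minimal user-space sketch of that load-store discipline,
    with C11 relaxed atomics standing in for READ_ONCE()/WRITE_ONCE() (names
    and types here are illustrative, not the in-tree code):

        #include <stdatomic.h>

        /* Subtract and clamp at zero; the transient result lives only in
         * 'res', so a concurrent reader sees either 'var' or the final
         * clamped value, never an underflowed one. */
        static void sub_positive(_Atomic unsigned long *ptr, unsigned long val)
        {
            unsigned long var = atomic_load_explicit(ptr, memory_order_relaxed);
            unsigned long res = var - val;

            if (res > var)      /* unsigned underflow */
                res = 0;
            atomic_store_explicit(ptr, res, memory_order_relaxed);
        }
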
3966 * of a group with small tg->shares value. It is a floor value which is
4108 * However, because tg->load_avg is a global value there are performance
4112 * differential update where we store the last value we propagated. This in
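
    These two fragments describe a differential update: each contributor
    remembers the last value it propagated and pushes only the delta into
    the shared tg->load_avg, rather than rewriting a global sum every time.
    A minimal sketch of the scheme under illustrative names (the in-tree
    version also filters out deltas too small to be worth the atomic):

        #include <stdatomic.h>

        struct contributor {
            long last_contrib;              /* last value propagated */
        };

        static _Atomic long global_load;    /* shared by many CPUs */

        /* Keep global_load equal to the sum of all current contributions
         * while touching the shared cacheline only when something changed. */
        static void update_global(struct contributor *c, long cur)
        {
            long delta = cur - c->last_contrib;

            if (delta) {
                atomic_fetch_add_explicit(&global_load, delta,
                                          memory_order_relaxed);
                c->last_contrib = cur;
            }
        }
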
4498 * the old clock_pelt_idle value is observed alongside the new clock_idle,
4874 * Check if a (signed) value is within a specified (unsigned) margin,
4879 * NOTE: this only works when value + margin < INT_MAX.
4881 static inline bool within_margin(int value, int margin)
4882 {
4883 	return ((unsigned int)(value + margin - 1) < (2 * margin - 1));
4884 }
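
    The identity behind the helper is abs(x) < y <=> (unsigned)(x + y - 1)
    < 2*y - 1, which holds for 0 < margin as long as value + margin stays
    below INT_MAX, per the NOTE above. A quick user-space cross-check
    (hypothetical test harness, not from the tree):

        #include <assert.h>
        #include <stdbool.h>
        #include <stdlib.h>

        static inline bool within_margin(int value, int margin)
        {
            return ((unsigned int)(value + margin - 1) < (2 * margin - 1));
        }

        int main(void)
        {
            /* Compare against the obvious abs() form over a small range;
             * the two must agree for every (value, margin) pair. */
            for (int margin = 1; margin <= 64; margin++)
                for (int value = -256; value <= 256; value++)
                    assert(within_margin(value, margin) ==
                           (abs(value) < margin));
            return 0;
        }
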
4927 * already ~1% close to its last activation value.
4949 * of the task size. This is done by storing the current PELT value
4950 * as ue.enqueued and by using this value to update the Exponential
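
    These lines come from the util_est path: when a task finishes an
    activation, its current PELT sample is folded into an Exponentially
    Weighted Moving Average. A sketch of the shift-based EWMA being
    described, ewma(t) = w * sample + (1 - w) * ewma(t-1) with
    w = 1/2^shift (the 1/4 weight and the names are assumptions for
    illustration):

        #define EWMA_WEIGHT_SHIFT   2   /* new-sample weight w = 1/4 */

        /* Fixed-point EWMA: ewma + (sample - ewma) / 2^shift, using only
         * shifts and adds so it stays cheap on every update. */
        static unsigned long ewma_update(unsigned long ewma,
                                         unsigned long sample)
        {
            long diff = (long)sample - (long)ewma;

            ewma = (ewma << EWMA_WEIGHT_SHIFT) + diff;
            return ewma >> EWMA_WEIGHT_SHIFT;
        }
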
5068 * actual fitness value here. We only care if uclamp_max fits
5077 * need to take into account the boosted value fits the CPU without
7682 * capacity_orig value through the cpuset), the key will be set
7721 * The unit of the return value must be the same as that of CPU capacity
8826 * is derived from the nice value as per sched_prio_to_weight[].
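
    For concreteness: sched_prio_to_weight[] is geometric, with nice 0
    mapping to weight 1024 and each nice step scaling the weight by about
    1.25, so weight(nice) ~= 1024 / 1.25^nice; two CPU-bound tasks one nice
    level apart therefore split the CPU roughly 55%/45%
    (1024 / (1024 + 820)).
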
9339 * value. Make sure that env->imbalance decreases
11408 * Max backoff if we encounter pinned tasks. Pretty arbitrary value, but
12109 * Ensure the rq-wide value also decays but keep it at a