Lines Matching refs:balance
1713 /* Check if run-queue part of active NUMA balance. */
1963 * balance improves then stop the search. While a better swap
2079 * balance domains, some of which do not cross NUMA boundaries.
4937 * load-balance operations.
5972 /* balance early to pull high priority tasks */
7157 sd = NULL; /* Prefer wake_affine over balance flags */
7714 * To achieve this balance we define a measure of imbalance which follows
7735 * of load-balance at each level inv. proportional to the number of CPUs in
7744 * | | `- number of CPUs doing load-balance
7748 * Coupled with a limit on how many tasks we can migrate every balance pass,
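(The hits at 7714-7748 all come from the long design comment above the load-balancing code; the line numbers and comment text appear to match kernel/sched/fair.c, though the file name is not shown in this listing. The idea described there is that balance means each CPU carries runnable weight roughly in proportion to its capacity, and that the work done per balance pass is bounded. Below is a minimal toy sketch of that "measure of imbalance" idea; all names are hypothetical and it is not the kernel's calculate_imbalance().)

    /*
     * Toy illustration only: balance means weight/capacity is roughly equal
     * everywhere, so the imbalance between a busy CPU and the local CPU can
     * be read off their deviation from the machine-wide average ratio.
     * Hypothetical types and names; capacity is assumed non-zero.
     */
    #include <stdio.h>

    struct toy_cpu_stat {
            unsigned long weight;    /* sum of runnable task weights */
            unsigned long capacity;  /* compute capacity left for fair tasks */
    };

    static unsigned long toy_ratio(const struct toy_cpu_stat *s)
    {
            return s->weight * 1024 / s->capacity;  /* 1024-based fixed point */
    }

    static long toy_imbalance(const struct toy_cpu_stat *busiest,
                              const struct toy_cpu_stat *local,
                              unsigned long avg_ratio)
    {
            unsigned long hi = toy_ratio(busiest), lo = toy_ratio(local);

            /* clamp both sides toward the average ratio before comparing */
            if (hi < avg_ratio)
                    hi = avg_ratio;
            if (lo > avg_ratio)
                    lo = avg_ratio;
            return (long)hi - (long)lo;      /* > 0: worth pulling from busiest */
    }

    int main(void)
    {
            struct toy_cpu_stat busy = { 3072, 1024 }, idle = { 512, 1024 };

            printf("imbalance = %ld\n", toy_imbalance(&busy, &idle, 2048));
            return 0;
    }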
8024 * meet load balance goals by pulling other tasks on src_cpu.
8064 * 3) too many balance attempts have failed.
8784 * If we were to balance group-wise we'd place two tasks in the first group and
8789 * by noticing the lower domain failed to reach balance and had difficulty
8794 * find_busiest_group() avoid some of the usual balance conditions to allow it
8817 * any benefit for the load balance.
9483 * a real need of migration, periodic load balance will
9607 * groups of a given sched_domain during load balance.
9608 * @env: load balance environment
9639 * the imbalance. The next load balance will take care of
9658 * amount of load to migrate in order to balance the
9769 * to restore balance.
9809 /* ASYM feature bypasses nice load balance check */
9939 * task, the next balance pass can still reduce the busiest
10126 * to do the newly idle load balance.
10252 * This changes load balance semantics a bit on who can move
10261 * moreover subsequent load balance cycles should correct the
10282 * We failed to reach balance because of affinity.
10315 * Increment the failure counter only on periodic balance.
10316 * We do not want newidle balance, which can be very
10343 * only after active load balance is finished.
10383 * We reach balance although we may have faced some affinity
10397 * We reach balance because all tasks are pinned at this level so
10409 * newidle_balance() disregards balance intervals, so we could
10456 /* used by idle balance, so cpu_busy = 0 */
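(The hits between 10126 and 10456 come from the main balance pass. A recurring theme in them, 10282-10343 together with the active-balance hit at 11457, is the escalation path when nothing can be detached: affinity-pinned or repeatedly failing passes bump a failure counter on periodic passes only, and past a threshold the busiest CPU is asked to push away the task it is currently running. The self-contained toy below sketches just that escalation logic; the names are hypothetical and this is not kernel code.)

    #include <stdbool.h>
    #include <stdio.h>

    #define TOY_FAILED_MAX 4

    struct toy_rq {
            int nr_pulled;           /* tasks detached this pass */
            int nr_failed;           /* consecutive failed periodic passes */
            bool active_balance;     /* busiest CPU asked to push its running task */
    };

    static void toy_balance_pass(struct toy_rq *busiest, bool periodic_pass)
    {
            if (busiest->nr_pulled > 0) {
                    busiest->nr_failed = 0;          /* reached balance, reset */
                    return;
            }
            if (!periodic_pass)
                    return;                          /* newidle passes don't count */
            if (++busiest->nr_failed > TOY_FAILED_MAX)
                    busiest->active_balance = true;  /* force-migrate the running task */
    }

    int main(void)
    {
            struct toy_rq rq = { 0 };

            for (int pass = 0; pass < 6; pass++)
                    toy_balance_pass(&rq, true);
            printf("active balance requested: %s\n",
                   rq.active_balance ? "yes" : "no");
            return 0;
    }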
10609 * This trades load-balance latency on larger machines for less cross talk.
10660 * Stop the load balance at this level. There is another
10722 * balance. Other idle CPUs have already rebalanced with
10725 * balance for itself and we need to update the
10800 * is idle. And the softirq performing nohz idle load balance
10901 * For asymmetric systems, we do not want to nicely balance
10918 * the others are - so just get a nohz balance going if it looks
10998 * the nohz idle balance, which should be avoided.
11057 * Internal function that runs load balance for all idle cpus. The load balance
11058 * can be a simple update of blocked load or a complete load balance with
11121 * If time for next balance is due,
11122 * do the balance.
11163 /* The full idle balance loop has been done */
11176 * In CONFIG_NO_HZ_COMMON case, the idle balance kickee will do the
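(The hits from roughly 10800 through 11176 are about nohz idle balancing: tickless idle CPUs cannot run their own periodic balance, so one of them, the kickee, is woken to walk the whole idle set and balance on behalf of the others when each one's next balance time is due, then balance for itself. The toy below only sketches that iteration; the names are hypothetical and it is not kernel code.)

    #include <stdio.h>

    #define NR_TOY_CPUS 4

    struct toy_cpu {
            int idle;                    /* in nohz idle, not taking ticks */
            unsigned long next_balance;  /* jiffies-like deadline for next balance */
    };

    static void toy_nohz_idle_balance(struct toy_cpu cpus[], int kickee,
                                      unsigned long now)
    {
            for (int cpu = 0; cpu < NR_TOY_CPUS; cpu++) {
                    if (!cpus[cpu].idle || cpu == kickee)
                            continue;
                    if (now >= cpus[cpu].next_balance) {
                            /* time for next balance is due: do the balance */
                            cpus[cpu].next_balance = now + 8;
                            printf("kickee %d balanced idle cpu %d\n", kickee, cpu);
                    }
            }
            /* finally the kickee balances for itself */
            cpus[kickee].next_balance = now + 8;
    }

    int main(void)
    {
            struct toy_cpu cpus[NR_TOY_CPUS] = {
                    { 1, 0 }, { 1, 16 }, { 0, 0 }, { 1, 4 },
            };

            toy_nohz_idle_balance(cpus, 0, 10);
            return 0;
    }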
11285 * for load-balance and preemption/IRQs are still disabled avoiding
11359 /* Move the next balance forward */
11400 * give the idle CPUs a chance to load balance. Else we may
11401 * load balance only within the local sched_domain hierarchy
11408 /* normal load balance */
11457 /* Invoke active balance to force migrate currently running task */
12079 .balance = balance_fair,
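(The final hit is the fair class installing its balance callback in the scheduler's method table. Below is a minimal, self-contained sketch of that function-pointer-table pattern; the struct and names are hypothetical stand-ins, not the kernel's sched_class.)

    #include <stdbool.h>
    #include <stdio.h>

    struct toy_rq;   /* opaque runqueue handle for the sketch */

    struct toy_sched_class {
            const char *name;
            /* called before picking a task; may pull work from other CPUs */
            bool (*balance)(struct toy_rq *rq);
    };

    static bool balance_fair_toy(struct toy_rq *rq)
    {
            (void)rq;
            /* a real implementation would run a newidle-style balance here */
            return true;
    }

    static const struct toy_sched_class fair_sched_class_toy = {
            .name    = "fair",
            .balance = balance_fair_toy,   /* analogue of ".balance = balance_fair" */
    };

    int main(void)
    {
            printf("%s class has a balance hook: %s\n", fair_sched_class_toy.name,
                   fair_sched_class_toy.balance ? "yes" : "no");
            return 0;
    }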