Lines Matching defs:busy
3032 * we (mostly) drive the selection from busy threads and that the
9106 * Select the fully busy group with highest avg_load. In
9283 * Computing avg_load makes sense only when group is fully busy or
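The avg_load referred to at 9106 and 9283 is, in essence, group load normalized by group capacity, and it only ranks groups fairly once they are fully busy. The sketch below is a stand-alone illustration of that comparison, not the kernel code: the struct, its field names and the 1024 fixed-point scale are assumptions made for the example.

#include <stdio.h>

#define CAPACITY_SCALE 1024UL	/* assumed fixed-point unit capacity */

/* Toy stand-in for the per-group statistics the comments refer to. */
struct group_stats {
	unsigned long group_load;	/* sum of runnable load in the group */
	unsigned long group_capacity;	/* total compute capacity of the group */
	unsigned long avg_load;		/* load normalized by capacity */
};

/*
 * Below full utilization this ratio is misleading: a lightly loaded,
 * high-capacity group would look "busier" than it really is, which is why
 * the comment at 9283 restricts it to fully busy or overloaded groups.
 */
static unsigned long compute_avg_load(const struct group_stats *gs)
{
	return gs->group_load * CAPACITY_SCALE / gs->group_capacity;
}

int main(void)
{
	struct group_stats a = { .group_load = 2048, .group_capacity = 2048 };
	struct group_stats b = { .group_load = 1536, .group_capacity = 1024 };

	a.avg_load = compute_avg_load(&a);
	b.avg_load = compute_avg_load(&b);

	/* The fully busy group with the highest avg_load is the busier one. */
	printf("a: %lu, b: %lu -> busiest: %s\n", a.avg_load, b.avg_load,
	       b.avg_load > a.avg_load ? "b" : "a");
	return 0;
}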
9346 * find_idlest_group() finds and returns the least busy CPU group within the
9706 * Local is fully busy but has to take more load to relieve the
9760 * nr_idle : dst_cpu is not busy and the number of idle CPUs is quite
9799 /* There is no busy sibling group to pull tasks from */
9875 * busy, let another idle CPU try to pull task.
10051 * ASYM_PACKING needs to force migrate tasks from busy but
10371 * is only 1 task on the busy runqueue (because we don't call
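The comments at 9799 through 10371 come from the decision path that either picks a source group to pull tasks from or declines to balance at all. Below is a much-simplified, self-contained skeleton of those early-exit checks; the toy_group type, its fields and pick_src_group() are illustrative names chosen for the example, not the kernel's.

#include <stdbool.h>
#include <stdio.h>

/* Toy group descriptor; field names are illustrative, not the kernel's. */
struct toy_group {
	unsigned long avg_load;
	unsigned int nr_running;
	bool overloaded;
};

/*
 * Skeleton of the "is there a busy group worth pulling from?" decision
 * that the comments above are taken from.
 */
static struct toy_group *pick_src_group(struct toy_group *busiest,
					struct toy_group *local,
					bool dst_is_idle)
{
	/* There is no busy sibling group to pull tasks from. */
	if (!busiest || !busiest->nr_running)
		return NULL;

	/*
	 * The candidate is not overloaded and we are not idle either:
	 * don't disturb it, let another idle CPU try to pull a task.
	 */
	if (!busiest->overloaded && !dst_is_idle)
		return NULL;

	/* No point pulling if it is not busier than the local group. */
	if (busiest->avg_load <= local->avg_load)
		return NULL;

	return busiest;
}

int main(void)
{
	struct toy_group local = { .avg_load = 512, .nr_running = 1 };
	struct toy_group busy  = { .avg_load = 1536, .nr_running = 3,
				   .overloaded = true };

	printf("pull from busiest: %s\n",
	       pick_src_group(&busy, &local, false) ? "yes" : "no");
	return 0;
}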
10439 * Reduce likelihood of busy balancing at higher domains racing with
10636 int busy = idle != CPU_IDLE && !sched_idle_cpu(cpu);
10671 interval = get_sd_balance_interval(sd, busy);
10688 busy = idle != CPU_IDLE && !sched_idle_cpu(cpu);
10691 interval = get_sd_balance_interval(sd, busy);
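The busy flag computed at 10636 (the CPU is neither fully idle nor running only SCHED_IDLE tasks) is what get_sd_balance_interval() uses to stretch the balance period, and it is recomputed at 10688 because the balance attempt itself may have changed the CPU's idle state. The user-space sketch below models that interval selection under stated assumptions: HZ, the clamp bound and the toy_domain/toy_balance_interval names are stand-ins, and the one-jiffy nudge corresponds to the racing-periods comment at 10439.

#include <stdio.h>

/* Stand-ins for HZ, the jiffies helpers and the clamp bound. */
#define HZ 250UL
#define MAX_BALANCE_INTERVAL (10 * HZ)	/* placeholder upper clamp */

struct toy_domain {
	unsigned long balance_interval;	/* in ms */
	unsigned int busy_factor;
};

static unsigned long msecs_to_jiffies_approx(unsigned long ms)
{
	return ms * HZ / 1000;
}

static unsigned long clamp_ul(unsigned long v, unsigned long lo,
			      unsigned long hi)
{
	return v < lo ? lo : (v > hi ? hi : v);
}

/*
 * A busy CPU balances less often (interval * busy_factor), and busy
 * intervals are nudged by one jiffy so that busy balancing at nested
 * domains is less likely to stay in lockstep.
 */
static unsigned long toy_balance_interval(const struct toy_domain *sd,
					  int cpu_busy)
{
	unsigned long interval = sd->balance_interval;

	if (cpu_busy)
		interval *= sd->busy_factor;

	interval = msecs_to_jiffies_approx(interval);

	if (cpu_busy)
		interval -= 1;

	return clamp_ul(interval, 1, MAX_BALANCE_INTERVAL);
}

int main(void)
{
	struct toy_domain sd = { .balance_interval = 8, .busy_factor = 16 };

	printf("idle: %lu jiffies, busy: %lu jiffies\n",
	       toy_balance_interval(&sd, 0), toy_balance_interval(&sd, 1));
	return 0;
}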
10743 * - When one of the busy CPUs notices that there may be an idle rebalancing
10825 * busy tick after returning from idle, we will update the busy stats.
10917 * other CPUs are idle). We can't really know from here how busy
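The comments from 10743 onwards describe the nohz idle load balancer: a busy CPU that notices idle rebalancing may be needed kicks a nominated idle CPU, which then balances on behalf of all the idle CPUs, and a CPU refreshes its busy stats on its first busy tick after leaving idle. The toy model below only illustrates that kick-and-run split; none of the names or data structures are the kernel's.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 4

static atomic_bool need_idle_balance;			/* the "kick" flag */
static int nr_running[NR_CPUS] = { 3, 0, 0, 1 };	/* toy per-CPU load */

/*
 * Called from a busy CPU's periodic tick: if it is overloaded while other
 * CPUs sit idle, ask the idle load balancer to run on their behalf.
 */
static void busy_tick(int cpu)
{
	bool any_idle = false;

	for (int i = 0; i < NR_CPUS; i++)
		if (nr_running[i] == 0)
			any_idle = true;

	if (nr_running[cpu] > 1 && any_idle)
		atomic_store(&need_idle_balance, true);
}

/*
 * Run by a nominated idle CPU: it cannot know from here how busy the
 * others are without looking, so it only does the walk when kicked.
 */
static void idle_load_balancer(void)
{
	if (!atomic_exchange(&need_idle_balance, false))
		return;

	for (int i = 0; i < NR_CPUS; i++)
		printf("balance on behalf of CPU%d (nr_running=%d)\n",
		       i, nr_running[i]);
}

int main(void)
{
	busy_tick(0);		/* CPU0 is overloaded, CPU1/2 are idle */
	idle_load_balancer();	/* the kicked idle CPU does the walk */
	return 0;
}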