
Searched refs:balance (Results 1 - 4 of 4) sorted by relevance

/device/soc/rockchip/common/sdk_linux/kernel/sched/
sched.h
    659   * This list is used during load balance.
    1899  int (*balance)(struct rq *rq, struct task_struct *prev, struct rq_flags *rf); member
rt.c
    1649  * picked for load-balance and preemption/IRQs are still in balance_rt()
    2791  /* cpu has max capacity, no need to do balance */ in check_for_migration_rt()
    2847  .balance = balance_rt,
core.c
    75    * Number of tasks to iterate in a single balance run.
    4744  * We can terminate the balance pass as soon as we know there is in put_prev_task_balance()
    4749  if (class->balance(rq, prev, rf)) { in put_prev_task_balance()
    5966  /* Run balance callbacks after we've adjusted the PI chain: */ in __sched_setscheduler()
    7452  * because !cpu_active at this point, which means load-balance in migrate_tasks()
fair.c
    1713  /* Check if run-queue part of active NUMA balance. */ in task_numa_assign()
    1963  * balance improves then stop the search. While a better swap in task_numa_compare()
    2079  * balance domains, some of which do not cross NUMA boundaries. in task_numa_migrate()
    4937  * load-balance operations.
    5972  /* balance early to pull high priority tasks */ in dequeue_task_fair()
    7157  sd = NULL; /* Prefer wake_affine over balance flags */ in select_task_rq_fair()
    7714  * To achieve this balance we define a measure of imbalance which follows
    7735  * of load-balance at each level inv. proportional to the number of CPUs in
    7744  * | | `- number of CPUs doing load-balance
    7748  * Coupled with a limit on how many tasks we can migrate every balance pass
[all...]
