Lines Matching defs:all
433 /* Iterate through all leaf cfs_rq's on a runqueue */
1364 /* Handle placement on systems where not all nodes are directly connected. */
1560 * The node is overloaded and can't provide expected CPU cycles to all
1566 /* Cached statistics for all CPUs within a node */
1663 * Gather all necessary information to make NUMA balancing placement
1873 * Compare the group weights. If a task is all by itself
2262 * completely idle or all activity is in areas that are not of interest
2363 /* Direct connections between all NUMA nodes. */
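The matches at 1364 and 2363 both concern how placement copes with NUMA topology. For orientation, a minimal sketch of the topology classes those comments distinguish, following enum numa_topology_type as I recall it from kernel/sched/sched.h (the per-value comments are mine, not the kernel's):

enum numa_topology_type {
	NUMA_DIRECT,		/* every node one hop from every other (2363) */
	NUMA_GLUELESS_MESH,	/* remote nodes reached via intermediate nodes */
	NUMA_BACKPLANE,		/* nodes connected through a backplane switch  */
};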
2501 * Normalize the faults_from, so all tasks in a group
2504 * little overall impact on throughput, and thus their
2687 * safely free all relevant data structures. Otherwise, there might be
3247 * global sum we all love to hate.
3286 * That is, the sum collapses because all other CPUs are idle; the UP scenario.
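3247 and 3286 come from the commentary on group-entity weights (calc_group_shares() and its neighbours). The sum being hated, and later collapsing, is the per-CPU aggregation in the share formula; my transcription of the comment's equation, under its own naming (tg: task group, grq: per-CPU group runqueue, ge: the group's sched entity on one CPU):

\[
ge\_weight \;=\; \frac{tg\_weight \times grq\_load}{\sum_{\text{CPUs}} grq\_load}
\]

When every other CPU is idle, the denominator reduces to the local grq's load and the expression collapses to tg_weight; that is the UP scenario 3286 refers to.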
3642 * assuming all tasks are equally runnable.
3763 * The cfs_rq avg is the direct sum of all its entities (blocked and runnable)
3764 * avg. The immediate corollary is that all (fair) tasks must be attached, see
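3763-3764 state the central PELT invariant. Restated as a formula (my reconstruction, using the standard geometric-series definition documented in kernel/sched/pelt.c):

\[
\overline{load}_{cfs\_rq} \;=\; \sum_{se \,\in\, cfs\_rq} \overline{load}_{se},
\qquad
\overline{load}_{se} \;=\; u_0 + u_1\,y + u_2\,y^2 + \cdots,
\quad y^{32} = \tfrac{1}{2}
\]

where u_i is the entity's contribution in the i-th most recent 1024 µs window. Because the queue average is literally the sum of its entities' averages, every fair task must be attached exactly once, which is what 3764 insists on.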
4675 * still in the tree, provided there was anything in the tree at all.
5867 * result in the load balancer ruining all the task placement
6127 * interrupt-intensive workload could force all tasks onto one
6363 * Since SMT siblings share all cache levels, inspecting this limited remote
6959 * all the most energy-efficient CPU candidates (according to the Energy
7695 * W_i,n/P_i == W_j,n/P_j for all i,j (1)
7731 * for all i,j solution, we create a tree of CPUs that follows the hardware
7746 * `- sum over all levels
7762 * A^(log_2 n)_i,j != 0 for all i,j (7)
7801 * rewrite all of this once again.]
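The fragments at 7695-7801 all come from the one large load-balancing comment in fair.c. The two numbered conditions matched above read more clearly in LaTeX (my transcription; W_{i,n} is the n-th order weight average on CPU i, P_i its compute capacity, A the adjacency matrix of the balance tree over n CPUs):

\[
\frac{W_{i,n}}{P_i} \;=\; \frac{W_{j,n}}{P_j} \quad \text{for all } i,j \tag{1}
\]
\[
\left(A^{\log_2 n}\right)_{i,j} \;\neq\; 0 \quad \text{for all } i,j \tag{7}
\]

(1) is the fairness goal: equal weight per unit of compute capacity everywhere. (7) is why per-level balancing converges: after log_2 n levels of the topology tree every CPU can exchange load with every other, so any imbalance has a path between any pair of CPUs in O(log n) balance steps.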
7806 enum fbq_type { regular, remote, all };
7840 * The CPU is overloaded and can't provide expected CPU cycles to all
8168 * We don't want to steal all, otherwise we may be treated likewise,
8320 * attach_tasks() -- attaches all tasks detached by detach_tasks() to their
8495 * Compute the hierarchical load factor for cfs_rq and all its ascendants.
8610 unsigned long total_load; /* Total load of all groups in sd */
8611 unsigned long total_capacity; /* Total capacity of all groups in sd */
8612 unsigned long avg_load; /* Average load across all groups in sd */
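8610-8612 are the sd_lb_stats accumulators, and the third is derived from the first two. As a sketch of the relation computed in find_busiest_group() (the scale constant's name is from the kernel sources as I recall it):

	avg_load = (SCHED_CAPACITY_SCALE * total_load) / total_capacity;

Normalizing load by capacity is what lets groups of different sizes and CPU strengths be compared on one scale.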
9108 * group because tasks have all the compute capacity that they need
9160 return all;
9171 return all;
9176 return all;
9627 * In case of asym capacity, we will try to migrate all load to
9729 * Both groups are or will become overloaded and we're trying to get all
9816 * work because they assume all things are equal, which typically
9864 /* Try to move all excess tasks to child's sibling domain */
9933 * - all: there is no distinction
10052 * lower-priority CPUs in order to pack all tasks in the
10125 * In the newly idle case, we will allow all the CPUs
10169 .fbq_type = all,
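The enum at 7806, the doc line at 9933, the three `return all;` sites (9160-9176), and the `.fbq_type = all` default at 10169 are one mechanism. Below is a self-contained toy model of the classification; struct and field names are stand-ins of mine, the kernel's equivalents live in sg_lb_stats and struct rq:

#include <stdio.h>

enum fbq_type { regular, remote, all };

struct group_stats {
	unsigned int nr_running;	/* all runnable tasks               */
	unsigned int nr_numa;		/* tasks with a preferred NUMA node */
	unsigned int nr_preferred;	/* tasks already on that node       */
};

/* Sketch of fbq_classify_group():
 *  - regular: some tasks have no NUMA preference at all
 *  - remote:  all tasks have a preference, but some sit on the wrong node
 *  - all:     every task is ideally placed; no distinction to exploit
 */
static enum fbq_type classify(const struct group_stats *s)
{
	if (s->nr_running > s->nr_numa)
		return regular;
	if (s->nr_running > s->nr_preferred)
		return remote;
	return all;
}

int main(void)
{
	static const char *names[] = { "regular", "remote", "all" };
	struct group_stats s = { .nr_running = 4, .nr_numa = 4, .nr_preferred = 2 };

	printf("fbq_type = %s\n", names[classify(&s)]);	/* prints "remote" */
	return 0;
}

Starting the balance env at `all` (10169) means no filtering; once the busiest group classifies as regular or remote, find_busiest_queue() can skip runqueues whose tasks are better NUMA-placed than that, steering migration toward the tasks whose placement matters least.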
10397 * We reach balance because all tasks are pinned at this level so
10650 * visit to all the domains. Decay ~1% per second.
10745 * load balancing for all the idle CPUs.
11057 * Internal function that runs load balance for all idle CPUs. The load balance
11061 * through all idle CPUs.
11177 * rebalancing for all the CPUs whose scheduler ticks are stopped.