Lines Matching refs:group
496 /* Do the two (enqueued) entities belong to the same group? */
1132 * nothing has been attached to the task group yet.
1641 static inline unsigned long group_faults_cpu(struct numa_group *group, int nid)
1643 return group->faults[task_faults_idx(NUMA_CPU, nid, 0)] +
1644 group->faults[task_faults_idx(NUMA_CPU, nid, 1)];
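For context, the faults[] indexing used on lines 1643-1644 comes from a small helper in the same file; a sketch of it from memory (treat the exact constant names as assumptions):

	/* Sketch: flatten (stat, node, private/shared) into an index of
	 * numa_group->faults[]. NR_NUMA_HINT_FAULT_TYPES is 2: priv=1 for
	 * private faults, priv=0 for shared faults. */
	static inline int task_faults_idx(enum numa_faults_stats s, int nid, int priv)
	{
		return NR_NUMA_HINT_FAULT_TYPES * (s * nr_node_ids + nid) + priv;
	}

So group_faults_cpu() above simply sums the shared and private NUMA_CPU fault counters for node nid.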
1673 * considered part of a numa group's pseudo-interleaving set. Migrations
1716 * of nodes, and move tasks towards the group with the most
1719 * of each group. Skip other nodes.
1751 * task group, on a particular numa node. The group weight is given a
1752 * larger multiplier, in order to group tasks together that are almost
2297 * If dst and source tasks are in the same NUMA group, or not
2298 * in any group then look only at task weights.
2303 * Do not swap within a group or between tasks that have
2304 * no group if there is spare capacity. Swapping does
2315 * tasks within a group over tiny differences.
2321 * Compare the group weights. If a task is all by itself
2322 * (not part of a group), use the task weight instead.
2549 * multiple NUMA nodes; in order to better consolidate the group,
2828 * inside the highest scoring group of nodes. The nodemask tricks
2846 /* Sum group's NUMA faults; includes a==b case. */
2855 /* Remember the top group. */
2861 * just one node left in each "group", the
2900 /* If the task is part of a group, prevent parallel updates to group stats */
2928 * Normalize the faults_from, so all tasks in a group
2946 * safe because we can only change our own group
3059 * Only join the other group if it's bigger; if we're the bigger group,
3876 * That is, the weight of a group entity is the proportional share of the
3877 * group weight based on the group runqueue weights. That is:
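The formula that line 3877 introduces with "That is:" reads, approximately (reconstructed from the surrounding comment in fair.c):

	                    tg->weight * grq->load.weight
	  ge->load.weight = -----------------------------
	                      \Sum grq->load.weight

i.e. a group entity's weight on a CPU is the task group's weight scaled by that CPU runqueue's share of the group's total runqueue weight.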
3903 * conditions. Specifically, the case where the group was idle and we start the
3966 * of a group with small tg->shares value. It is a floor value which is
3968 * the group on a CPU.
3970 * E.g. on 64-bit for a group with tg->shares of scale_load(15)=15*1024
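The floor discussed around 3966-3970 ends up as the lower bound of the final clamp in calc_group_shares(); roughly (a sketch, assuming current fair.c):

	/* Never hand out less than (unscaled) MIN_SHARES, nor more than the
	 * task group's configured shares. */
	return clamp_t(long, shares, MIN_SHARES, tg_shares);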
3981 * Recomputes the group entity based on the current state of its group
4167 * that for each group:
4176 * the group entity and group rq do not have their PELT windows aligned).
4701 * track group sched_entity load average for task_h_load calc in migration
5319 * h_nr_running of its group cfs_rq.
5321 * its group cfs_rq
5405 * h_nr_running of its group cfs_rq.
5407 * - For group entity, update its weight to reflect the new share
5408 * of its group cfs_rq.
5598 * default period for cfs group bandwidth.
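For reference, the helper this comment documents is tiny; roughly (a sketch, units are nanoseconds):

	static inline u64 default_cfs_period(void)
	{
		return 100000000ULL;	/* 100 ms default CFS bandwidth period */
	}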
5721 * Ensure that neither of the group entities corresponding to src_cpu or
5722 * dest_cpu is a member of a throttled hierarchy when performing group
5771 /* group is entering throttled state, stop time */
5987 * race with group being freed in the window between removing it
6287 * When a group wakes up we want to make sure that its quota is not already
6296 /* an active group must be handled by the update_curr()->put() path */
6300 /* ensure the group is not already throttled */
6470 * must have raced with the last task leaving the group while there
6474 * guaranteed at this point that no additional cfs_rq of this group can
6495 * The race is harmless, since modifying bandwidth settings of unhooked group
7179 * find_idlest_group_cpu - find the idlest CPU among the CPUs in the group.
7182 find_idlest_group_cpu(struct sched_group *group, struct task_struct *p, int this_cpu)
7192 if (group->group_weight == 1)
7193 return cpumask_first(sched_group_span(group));
7196 for_each_cpu_and(i, sched_group_span(group), p->cpus_ptr) {
7257 struct sched_group *group;
7266 group = find_idlest_group(sd, p, cpu);
7267 if (!group) {
7272 new_cpu = find_idlest_group_cpu(group, p, cpu);
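The matches at 7257-7272 come from the slow wakeup path. A condensed sketch of how the two helpers compose inside find_idlest_cpu() (simplified from memory; the exact control flow may differ between kernel versions):

	while (sd) {
		struct sched_group *group;
		struct sched_domain *tmp;
		int weight;

		if (!(sd->flags & sd_flag)) {
			/* this level doesn't balance for this wakeup type */
			sd = sd->child;
			continue;
		}

		/* idlest group at this level ... */
		group = find_idlest_group(sd, p, cpu);
		if (!group) {
			sd = sd->child;
			continue;
		}

		/* ... then the idlest CPU inside that group */
		new_cpu = find_idlest_group_cpu(group, p, cpu);
		if (new_cpu == cpu) {
			/* keep looking at lower domain levels of 'cpu' */
			sd = sd->child;
			continue;
		}

		/* restart the walk from the chosen CPU's smaller domains */
		cpu = new_cpu;
		weight = sd->span_weight;
		sd = NULL;
		for_each_domain(cpu, tmp) {
			if (weight <= tmp->span_weight)
				break;
			if (tmp->flags & sd_flag)
				sd = tmp;
		}
	}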
7747 * range. Otherwise a group of CPUs (CPU0 util = 121% + CPU1 util = 80%)
8255 * Balances load by selecting the idlest CPU in the idlest group, or under
8513 * group (e.g. via set_curr_task), since update_curr() (in the
8540 * Preempt an idle group in favor of a non-idle group (and don't preempt
8866 * `- size of each group
8932 * 'group_type' describes the group of CPUs at the moment of load balancing.
8934 * The enum is ordered by pulling priority, with the group with lowest priority
8936 * group. See update_sd_pick_busiest().
8939 /* The group has spare capacity that can be used to run more tasks. */
8942 * The group is fully used and the tasks don't compete for more CPU
8952 * Balance SMT group that's fully busy. Can benefit from migration
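The group_type values those comments (8939-8952) annotate form roughly the following ordering, lowest pulling priority first (condensed sketch; comments abbreviated):

	enum group_type {
		group_has_spare = 0,	/* spare capacity, can take more tasks */
		group_fully_busy,	/* fully used, but tasks don't compete for cycles */
		group_misfit_task,	/* a task doesn't fit this CPU's capacity */
		group_smt_balance,	/* busy SMT group; a task could move to an idle core */
		group_asym_packing,	/* SD_ASYM_PACKING: a preferred CPU is available */
		group_imbalanced,	/* affinity previously prevented balancing */
		group_overloaded	/* can't give all tasks their expected CPU time */
	};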
9600 * group is a fraction of its parent's load.
9687 unsigned long avg_load; /* Avg load across the CPUs of the group */
9688 unsigned long group_load; /* Total load over the CPUs of the group */
9690 unsigned long group_util; /* Total utilization over the CPUs of the group */
9691 unsigned long group_runnable; /* Total runnable time over the CPUs of the group */
9692 unsigned int sum_nr_running; /* Nr of tasks running in the group */
9693 unsigned int sum_h_nr_running; /* Nr of CFS tasks running in the group */
9711 struct sched_group *busiest; /* Busiest group in this sd */
9712 struct sched_group *local; /* Local group in this sd */
9718 struct sg_lb_stats busiest_stat;/* Statistics of the busiest group */
9719 struct sg_lb_stats local_stat; /* Statistics of the local group */
9728 * busiest_stat::idle_cpus to the worst busiest group because
9794 struct sched_group *group, *sdg = sd->groups;
9814 * span the current group.
9830 * span the current group.
9833 group = child->groups;
9835 struct sched_group_capacity *sgc = group->sgc;
9837 sched_group_span(group);
9846 group = group->next;
9847 } while (group != child->groups);
9884 * cpumask covering 1 CPU of the first group and 3 CPUs of the second group.
9890 * If we were to balance group-wise we'd place two tasks in the first group and
9891 * two tasks in the second group. Clearly this is undesired as it will overload
9892 * cpu 3 and leave one of the CPUs in the second group unused.
9894 * The current solution to this issue is detecting the skew in the first group
9898 * When this is so detected, this group becomes a candidate for busiest; see
9901 * to create an effective group imbalance.
9904 * group imbalance and decide the groups need to be balanced again. A most
9908 static inline int sg_imbalanced(struct sched_group *group)
9910 return group->sgc->imbalance;
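sg_imbalanced() only reads the flag; the flag is set from load_balance() when pinned tasks (LBF_SOME_PINNED) kept the group from being balanced, so the parent domain treats it as a busiest candidate on a later pass. A rough sketch of that update, assuming it still matches current fair.c:

	if (sd_parent) {
		int *group_imbalance = &sd_parent->groups->sgc->imbalance;

		if ((env.flags & LBF_SOME_PINNED) && env.imbalance > 0)
			*group_imbalance = 1;	/* flag the group for the parent level */
	}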
9914 * group_has_capacity returns true if the group has spare capacity that could
9916 * We consider that a group has spare capacity if the number of tasks is
9943 * group_is_overloaded returns true if the group has more tasks than it can
9945 * group_is_overloaded is not equal to !group_has_capacity because a group
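A minimal sketch of how the two predicates relate, using the sg_lb_stats fields listed earlier (thresholds simplified; the real functions also factor in group_runnable, so treat the exact checks as assumptions):

	static inline bool
	group_has_capacity(unsigned int imbalance_pct, struct sg_lb_stats *sgs)
	{
		if (sgs->sum_nr_running < sgs->group_weight)
			return true;	/* fewer tasks than CPUs */

		return sgs->group_capacity * 100 >
		       sgs->group_util * imbalance_pct;
	}

	static inline bool
	group_is_overloaded(unsigned int imbalance_pct, struct sg_lb_stats *sgs)
	{
		if (sgs->sum_nr_running <= sgs->group_weight)
			return false;	/* just enough (or fewer) tasks: no spare
					 * capacity, yet not overloaded either */

		return sgs->group_capacity * 100 <
		       sgs->group_util * imbalance_pct;
	}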
9969 struct sched_group *group,
9975 if (sg_imbalanced(group))
10015 * @sds: Load-balancing data with statistics of the local group
10016 * @sgs: Load-balancing statistics of the candidate busiest group
10017 * @group: The candidate busiest group
10020 * preferred CPU of @group.
10024 * can only do it if @group is an SMT group and has exactly one busy CPU. Larger
10035 struct sched_group *group)
10045 if (group->flags & SD_SHARE_CPUCAPACITY) {
10050 return sched_asym_prefer(env->dst_cpu, group->asym_prefer_cpu);
10053 /* One group has more than one SMT CPU while the other group does not */
10065 struct sched_group *group)
10071 * For SMT source group, it is better to move a task
10073 * Note that if a group has a single SMT, SD_SHARE_CPUCAPACITY
10076 if (group->flags & SD_SHARE_CPUCAPACITY &&
10110 /* Take advantage of resources in an empty sched group */
10134 * @sds: Load-balancing data with statistics of the local group.
10135 * @group: sched_group whose statistics are to be updated.
10136 * @sgs: variable to hold the statistics for this group.
10141 struct sched_group *group,
10149 local_group = group == sds->local;
10151 for_each_cpu_and(i, sched_group_span(group), env->cpus) {
10201 sgs->group_capacity = group->sgc->capacity;
10203 sgs->group_weight = group->group_weight;
10206 if (!group->group_weight) {
10210 sgs->group_weight = group->group_weight;
10214 /* Check if dst CPU is idle and preferred to this group */
10217 sched_asym(env, sds, sgs, group)) {
10221 /* Check for loaded SMT group to be balanced to dst CPU */
10222 if (!local_group && smt_balance(env, sgs, group))
10225 sgs->group_type = group_classify(env->sd->imbalance_pct, group, sgs);
10227 /* Computing avg_load makes sense only when group is overloaded */
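The computation this comment guards looks roughly like this (the same SCHED_CAPACITY_SCALE scaling is reused on the wakeup-stats side noted at 10521):

	if (sgs->group_type == group_overloaded)
		sgs->avg_load = (sgs->group_load * SCHED_CAPACITY_SCALE) /
				sgs->group_capacity;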
10234 * update_sd_pick_busiest - return 1 on busiest group
10240 * Determine if @sg is a busier group than the previously selected
10241 * busiest group.
10243 * Return: %true if @sg is a busier group than the previously selected
10244 * busiest group. %false otherwise.
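The first step of that decision is a direct comparison of the group_type ordering sketched above; roughly:

	if (sgs->group_type > busiest->group_type)
		return true;	/* candidate pulls with higher priority */

	if (sgs->group_type < busiest->group_type)
		return false;

	/* same type: fall through to per-type tie-breaks
	 * (avg_load, idle CPUs, misfit load, ...) */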
10260 * CPUs in the group should either be possible to resolve
10276 * The candidate and the current busiest group are the same type of
10277 * group. Let's check which one is the busiest according to the type.
10282 /* Select the overloaded group with highest avg_load. */
10289 * Select the 1st imbalanced group as we don't have any way to
10311 * Check if we have spare CPUs on either SMT group to
10321 * Select the fully busy group with highest avg_load. In
10323 * group because tasks have all compute capacity that they need
10361 * Select the non-overloaded group with the lowest number of idle CPUs
10364 * that the group has less spare capacity but finally more idle
10469 * @sd: The sched_domain level to look for idlest group.
10470 * @group: sched_group whose statistics are to be updated.
10471 * @sgs: variable to hold the statistics for this group.
10472 * @p: The task for which we look for the idlest group/CPU.
10475 struct sched_group *group,
10483 /* Assume that task can't fit any CPU of the group */
10487 for_each_cpu(i, sched_group_span(group)) {
10514 sgs->group_capacity = group->sgc->capacity;
10516 sgs->group_weight = group->group_weight;
10518 sgs->group_type = group_classify(sd->imbalance_pct, group, sgs);
10521 * Computing avg_load makes sense only when group is fully busy or
10532 struct sched_group *group,
10542 * The candidate and the current idlest group are the same type of
10543 * group. Let's check which one is the idlest according to the type.
10549 /* Select the group with lowest avg_load. */
10561 /* Select group with the highest max capacity */
10562 if (idlest->sgc->max_capacity >= group->sgc->max_capacity)
10567 /* Select group with most idle CPUs */
10571 /* Select group with lowest group_util */
10583 * find_idlest_group() finds and returns the least busy CPU group within the
10591 struct sched_group *idlest = NULL, *local = NULL, *group = sd->groups;
10607 /* Skip over this group if it has no CPUs allowed */
10609 if (!cpumask_intersects(sched_group_span(group),
10612 if (!cpumask_intersects(sched_group_span(group),
10617 /* Skip over this group if no cookie matched */
10618 if (!sched_group_cookie_match(cpu_rq(this_cpu), p, group))
10622 sched_group_span(group));
10626 local = group;
10631 update_sg_wakeup_stats(sd, group, sgs, p);
10633 if (!local_group && update_pick_idlest(idlest, &idlest_sgs, group, sgs)) {
10634 idlest = group;
10638 } while (group = group->next, group != sd->groups);
10641 /* There is no idlest group to push tasks to */
10645 /* The local group has been skipped because of CPU affinity */
10650 * If the local group is idler than the selected idlest group
10657 * If the local group is busier than the selected idlest group
10685 * If the local group is less loaded than the selected
10686 * idlest group don't try and push any tasks.
10702 /* Select group with the highest max capacity */
10750 * Select group with highest number of idle CPUs. We could also
10752 * up that the group has less spare capacity but finally more
10792 * imbalance_pct(117) when an LLC sched group is overloaded.
10883 * Indicate that the child domain of the busiest group prefers that tasks
10884 * go to a child's sibling domains first. NB the flags of a sched group
10961 * In the group_imb case we cannot rely on group-wide averages
10972 * Try to use spare capacity of local group without overloading it or
10991 * In some cases, the group's utilization is max or even
10994 * waiting task in this overloaded busiest group. Let's
11040 * busiest group
11052 * If the local group is more loaded than the selected
11053 * busiest group don't try to pull any tasks.
11064 * If the local group is more loaded than the average system
11075 * Both groups are or will become overloaded and we're trying to get all
11079 * reduce the group load below the group capacity. Thus we look for
11092 * Decision matrix according to the local and busiest group type:
11112 * find_busiest_group - Returns the busiest group within the sched_domain
11119 * Return: - The busiest group if imbalance exists.
11134 /* There is no busy sibling group to pull tasks from */
11156 * If the busiest group is imbalanced the below checks don't
11165 * If the local group is busier than the selected busiest group
11177 * If the local group is more loaded than the selected
11178 * busiest group don't try to pull any tasks.
11188 * Don't pull any tasks if this group is already above the
11195 * If the busiest group is more loaded, use imbalance_pct to be
11205 * group's child domain.
11214 * If the busiest group is not overloaded (and as a
11230 * If the busiest group is not overloaded
11232 * group wrt idle CPUs, it is balanced. The imbalance
11235 * on another group. Of course this applies only if
11236 * there is more than 1 CPU per group.
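Assuming the surrounding code still matches current fair.c, the check described at 11230-11236 is approximately:

	if (busiest->group_weight > 1 &&
	    local->idle_cpus <= (busiest->idle_cpus + 1))
		/* no meaningful idle-CPU imbalance between the local and the
		 * busiest group: consider the domain balanced */
		goto out_balanced;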
11260 * find_busiest_queue - find the busiest runqueue among the CPUs in the group.
11263 struct sched_group *group)
11270 for_each_cpu_and(i, sched_group_span(group), env->cpus) {
11292 * the next pass will adjust the group classification and
11552 /* Are we the first CPU of this group? */
11566 struct sched_group *group;
11592 group = find_busiest_group(&env);
11593 if (!group) {
11598 busiest = find_busiest_queue(&env, group);
11617 * an imbalance but busiest->nr_running <= 1, the group is
11712 * destination group that is receiving any migrated
12069 * CPU in our sched group which is doing load balancing more
13243 /* Account for a task changing its policy or group.
13427 /* guarantee group entities always have weight */