Lines Matching defs:it

98      * and if it ends up empty, it will inherit the parent's mask.
130 * task is moved into it.
305 * cpuset_mutex, then it blocks others wanting that mutex, ensuring that it
309 * just holding cpuset_mutex. While it is performing these checks, various
311 * Once it is ready to make the changes, it takes callback_lock, blocking
319 * If a task is only holding callback_lock, then it has read-only
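The fragments from lines 305-319 describe cpuset.c's two-lock scheme: cpuset_mutex serializes writers, while callback_lock guards the fields that readers look at. A minimal sketch of that pattern, assuming the cpuset.c context (the function names and the update body here are illustrative, not the file's actual code):

    /* Writer: take cpuset_mutex first, do any validation that may sleep,
     * then hold callback_lock only around the actual update. */
    static void sketch_update_cpuset(struct cpuset *cs,
                                     const struct cpumask *newmask)
    {
            mutex_lock(&cpuset_mutex);         /* blocks other writers */
            /* ... validation, possibly sleeping, goes here ... */
            spin_lock_irq(&callback_lock);     /* excludes readers briefly */
            cpumask_copy(cs->cpus_allowed, newmask);
            spin_unlock_irq(&callback_lock);
            mutex_unlock(&cpuset_mutex);
    }

    /* Reader: callback_lock alone is enough for read-only access. */
    static void sketch_read_cpus(struct cpuset *cs, struct cpumask *out)
    {
            spin_lock_irq(&callback_lock);
            cpumask_copy(out, cs->cpus_allowed);
            spin_unlock_irq(&callback_lock);
    }

Holding the sleeping lock for the checks and the spinlock only for the short copy is what lets readers proceed while a writer is still validating its change.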
757 * looping on the 'restart' label until it can no longer find
822 * If root is load-balancing, we can skip @cp if it
952 struct css_task_iter it;
955 css_task_iter_start(&cs->css, 0, &it);
957 while ((task = css_task_iter_next(&it))) {
961 css_task_iter_end(&it);
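The struct css_task_iter declarations matched here (lines 952, 1098, 1719, 1929) all follow the same start/next/end pattern for visiting every task attached to the cpuset's css. A sketch of that loop, with a hypothetical per-task callback standing in for whatever the real callers do to each task:

    /* Visit every task attached to @cs and apply @fn to it.  The iterator
     * keeps the current task pinned between next() calls, and
     * css_task_iter_end() releases the iterator. */
    static void sketch_for_each_cpuset_task(struct cpuset *cs,
                                            void (*fn)(struct task_struct *task))
    {
            struct css_task_iter it;
            struct task_struct *task;

            css_task_iter_start(&cs->css, 0, &it);
            while ((task = css_task_iter_next(&it)))
                    fn(task);
            css_task_iter_end(&it);
    }

The same shape recurs at lines 1098, 1719 and 1929 below; only the per-task work differs.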
976 * Clear default root domain DL accounting; it will be computed again
977 * if a task belongs to it.
1098 struct css_task_iter it;
1101 css_task_iter_start(&cs->css, 0, &it);
1102 while ((task = css_task_iter_next(&it))) {
1105 css_task_iter_end(&it);
1178 * function will also prevent any changes to the cpu list if it is not
1371 * If it becomes empty, inherit the effective mask of the
1431 * When the parent is invalid, it has to be too.
1538 * update_cpumask - update the cpus_allowed mask of a cpuset and all tasks in it
1548 /* top_cpuset.cpus_allowed tracks cpu_online_mask; it's read-only */
1635 * performed asynchronously as it can be called from the process migration path
1686 * parallel, it might temporarily see an empty intersection, which results in
1719 struct css_task_iter it;
1736 css_task_iter_start(&cs->css, 0, &it);
1737 while ((task = css_task_iter_next(&it))) {
1757 css_task_iter_end(&it);
1793 * If it becomes empty, inherit the effective mask of the
1844 * it's read-only
1929 struct css_task_iter it;
1932 css_task_iter_start(&cs->css, 0, &it);
1933 while ((task = css_task_iter_next(&it))) {
1936 css_task_iter_end(&it);
2063 * Update cpumask of parent's tasks except when it is the top
2108 * will be cut in half every 10 seconds, until it converges to zero.
2121 * per msec it maxes out at values just under 1,000,000. At constant
2122 * rates between one per msec and one per second, it will stabilize
2125 * it will be choppy, moving up on the seconds that have an event,
2127 * about one in 32 seconds, it decays all the way back to zero between
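Lines 2108-2127 come from the comment above cpuset.c's frequency meter, which tracks an event rate as an exponentially decaying counter. The halving claim is plain arithmetic: a per-second decay factor of 933/1000 gives (0.933)^10 ≈ 0.5, so the value is cut in half every 10 seconds. A small standalone sketch of that decay, using illustrative constants chosen to match the numbers in the comment rather than quoting cpuset.c's own definitions:

    #include <stdio.h>

    #define COEF    933          /* (933/1000)^10 ~= 0.5: halves every 10 ticks */
    #define SCALE  1000
    #define MAXVAL 1000000       /* cap, per the "just under 1,000,000" note */

    struct meter { long val; long time; long cnt; };

    /* Decay once per elapsed second, then fold in events counted since
     * the last update, clamped at MAXVAL. */
    static void meter_update(struct meter *m, long now)
    {
            long ticks = now - m->time;

            while (ticks-- > 0)
                    m->val = (COEF * m->val) / SCALE;
            m->time = now;
            m->val += m->cnt;
            if (m->val > MAXVAL)
                    m->val = MAXVAL;
            m->cnt = 0;
    }

    int main(void)
    {
            struct meter m = { .val = 1000, .time = 0, .cnt = 0 };
            long t;

            for (t = 1; t <= 30; t++) {
                    meter_update(&m, t);
                    if (t % 10 == 0)    /* printed value roughly halves every 10 steps */
                            printf("t=%2lds val=%ld\n", t, m.val);
            }
            return 0;
    }

With no new events the value decays toward zero; with a steady event stream it stabilizes where the per-second decay cancels the per-second additions, which is the behaviour the comment describes for rates between one per msec and one per second.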
2247 * but we can't allocate it dynamically there. Define it global and
2442 * cgroup_transfer_tasks() and waiting for it from a cgroupfs
2862 * clone() which initiated it. If this becomes a problem for some
2955 * which could have been changed by cpuset just after it inherits the
2956 * state from the parent and before it sits on the cgroup's task list.
3161 * transition it to the erroneous state.
3246 * If subparts_cpus is populated, it is likely that the check below
3248 * isn't changed. It is extra work, but it is better to be safe.
3412 * This is the absolute last resort for the scheduler and it is only used if
3432 * We own tsk->cpus_allowed; nobody can change it under us.
3440 * If we are called after it dropped the lock we must see all
3442 * set any mask even if it is not right from task_cs() pov,
3510 * current's mems_allowed, yes. If it's not a __GFP_HARDWALL request and this
3523 * _not_ set if it's a GFP_KERNEL allocation, and all nodes in the
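Lines 3510-3523 belong to the comment that explains when a page allocation may use a given node: any node in current's mems_allowed is allowed, a __GFP_HARDWALL request is refused beyond that, and other (GFP_KERNEL-style) requests may also use nodes allowed by the nearest hardwall ancestor cpuset. A hedged sketch of just that decision order, assuming the cpuset.c context; node_in_nearest_hardwall() is a hypothetical helper standing in for the walk up to the nearest mem_exclusive/hardwall ancestor:

    static bool sketch_node_allowed(int node, gfp_t gfp_mask)
    {
            if (node_isset(node, current->mems_allowed))
                    return true;               /* always OK in current's mems */
            if (gfp_mask & __GFP_HARDWALL)
                    return false;              /* hardwall requests stop here */
            /* GFP_KERNEL-style requests may also use any node allowed by
             * the nearest hardwall ancestor cpuset. */
            return node_in_nearest_hardwall(node, current);
    }

This mirrors only what these comment lines state; the real check in cpuset.c handles further special cases not covered by this fragment.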
3590 * to determine on which node to start looking, as it will for
3597 * because "it can't happen", and even if it did, it would be ok.
3600 * only set nodes in task->mems_allowed that are online. So it
3602 * offline node. But if it did, that would be ok, as this routine
3606 * is passed an offline node, it will fall back to the local node.
3686 * ran low on memory on all nodes it was allowed to use, and
3709 * - No need to task_lock(tsk) on this tsk->cpuset reference, as it
3710 * doesn't really matter if tsk->cpuset changes after we read it,
3711 * and we take cpuset_mutex, keeping cpuset_attach() from changing it