Lines matching defs:to in kernel/cgroup/cpuset.c

16  *  2006 Rework by Paul Menage to use generic cgroups
20 * This file is subject to the terms and conditions of the GNU General Public
89 * The user-configured masks can only be changed by writing to
93 * The effective masks are the real masks that apply to the tasks
106 /* user-configured CPUs and Memory Nodes allowed to tasks */
111 /* effective CPUs and Memory Nodes allowed to tasks */
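
The fields named at lines 106 and 111 live on struct cpuset itself. A minimal sketch of just those members (the real struct carries many more, e.g. the sub-partition bookkeeping mentioned at line 116):

#include <linux/cpumask.h>
#include <linux/nodemask.h>

struct cpuset {
        struct cgroup_subsys_state css;

        /* user-configured CPUs and Memory Nodes allowed to tasks */
        cpumask_var_t cpus_allowed;
        nodemask_t mems_allowed;

        /* effective CPUs and Memory Nodes allowed to tasks */
        cpumask_var_t effective_cpus;
        nodemask_t effective_mems;

        /* ... remaining members elided ... */
};
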
116 * CPUs allocated to child sub-partitions (default hierarchy only)
128 * - top_cpuset.old_mems_allowed is initialized to mems_allowed.
133 * then old_mems_allowed is updated to mems_allowed.
140 * Tasks are being attached to this cpuset. Used to prevent
177 * and the cpuset can be restored back to a partition root if the
178 * parent cpuset can give more CPUs back to this child cpuset.
186 * functions to avoid memory allocation in inner functions.
274 * @child_cs: loop cursor pointing to the current child
276 * @parent_cs: target cpuset to walk children of
286 * @des_cs: loop cursor pointing to the current descendant
288 * @root_cs: target cpuset to walk descendants of
292 * css_rightmost_descendant() to skip subtree. @root_cs is included in the
293 * iteration and the first node to be visited.
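
Lines 286-293 describe cpuset_for_each_descendant_pre(); the skip-a-subtree idiom they refer to looks roughly like the following (nothing_to_update() is a hypothetical predicate standing in for whatever condition a caller checks):

        struct cpuset *cp;
        struct cgroup_subsys_state *pos_css;

        rcu_read_lock();
        cpuset_for_each_descendant_pre(cp, pos_css, root_cs) {
                if (nothing_to_update(cp)) {
                        /* prune: jump past cp's entire subtree */
                        pos_css = css_rightmost_descendant(pos_css);
                        continue;
                }
                /* ... visit cp ... */
        }
        rcu_read_unlock();
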
304 * A task must hold both locks to modify cpusets. If a task holds
306 * is the only task able to also acquire callback_lock and be able to
310 * callback routines can briefly acquire callback_lock to query cpusets.
311 * Once it is ready to make the changes, it takes callback_lock, blocking
314 * Calls to the kernel memory allocator cannot be made while holding
320 * access to cpusets.
323 * by another task, we use alloc_lock in the task_struct fields to protect
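
Lines 304-323 sketch the two-level locking scheme: cpuset_mutex serializes writers, while the spinlock callback_lock protects the fields that hot paths read. A schematic of the discipline (exact lock types vary across kernel versions, so treat this as a sketch):

        /* writer: must hold both locks to change a cpuset */
        mutex_lock(&cpuset_mutex);
        /* ... compute new masks; any allocation happens before this point ... */
        spin_lock_irq(&callback_lock);
        cpumask_copy(cs->effective_cpus, new_cpus);
        spin_unlock_irq(&callback_lock);
        mutex_unlock(&cpuset_mutex);

        /* reader on a hot path: callback_lock alone is enough */
        spin_lock_irqsave(&callback_lock, flags);
        allowed = node_isset(node, cs->effective_mems);
        spin_unlock_irqrestore(&callback_lock, flags);

Keeping allocations outside the spinlock is exactly the constraint line 314 states: the allocator itself can call back into cpuset code, which would trip over callback_lock.
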
379 * One way or another, we guarantee to return some non-empty subset
403 * cpuset's effective_cpus is on its way to becoming
404 * identical to cpu_online_mask.
421 * One way or another, we guarantee to return some non-empty subset
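
Both guarantee_online_cpus() and guarantee_online_mems() make the non-empty promise of lines 379 and 421 the same way: walk up the hierarchy until an ancestor has something online. A sketch of the CPU variant under that reading (top_cpuset tracks the online mask, so the loop terminates):

static void guarantee_online_cpus(struct cpuset *cs, struct cpumask *pmask)
{
        /* climb toward top_cpuset until some effective CPU is online */
        while (!cpumask_intersects(cs->effective_cpus, cpu_online_mask))
                cs = parent_cs(cs);
        cpumask_and(pmask, cs->effective_cpus, cpu_online_mask);
}
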
470 * @cs: the cpuset that has cpumasks to be allocated.
524 * @cs: the cpuset that has cpumasks to be freed.
568 * @cs: the cpuset to be freed
577 * validate_change() - Used to validate that any proposed cpuset change
591 * or flags changed to new, trial values.
608 /* Remaining checks don't apply to root cpuset */
640 * be changed to have empty cpus_allowed or mems_allowed.
718 * The output of this function needs to be passed to kernel/sched/core.c
727 * routine would rather not worry about failures to rebuild sched
734 * cp - cpuset pointer, used (together with pos_css) to perform a
738 * csa - (for CpuSet Array) Array of pointers to all the cpusets
739 * that need to be load balanced, for convenient iterative
745 * doms - Conversion of 'csa' to an array of cpumasks, for passing to
747 * convenient format, that can be easily compared to the prior
748 * value to determine what partition elements (sched domains)
762 * element of the partition (one sched domain) to be passed to
819 * update_domain_attr_tree() to calc relax_domain_level of
876 * Now we know how many domains to create.
877 * Convert <csn, csa> to <ndoms, doms> and populate cpu masks.
886 * dattr==NULL case. No need to abort if alloc fails.
938 * Fall back to the default domain if kmalloc() failed.
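
Lines 745-748 and 938 describe the <csa> -> <doms> conversion and its failure mode. The fallback is cheap because the scheduler treats a NULL domain array as "one default domain"; roughly:

        cpumask_var_t *doms;

        doms = alloc_sched_domains(ndoms);      /* one cpumask per partition element */
        if (doms == NULL)
                ndoms = 1;      /* NULL doms + ndoms == 1: default domain */
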
977 * if a task belongs to it.
1014 * 'cpus' is removed, then call this routine to rebuild the
1030 * If we have raced with CPU hotplug, return early to avoid
1031 * passing doms with offlined cpu to partition_sched_domains().
1036 * is enough to detect racing CPU offlines.
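
Lines 1030-1036 describe an early-exit guard in the locked rebuild path; under that description the check is essentially (a sketch, ignoring the sub-partition case):

        lockdep_assert_held(&cpuset_mutex);

        /* raced with hotplug: effective_cpus still holds an offlined CPU;
         * bail out, the hotplug path will rebuild the domains again */
        if (!cpumask_equal(top_cpuset.effective_cpus, cpu_active_mask))
                return;
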
1090 * @cs: the cpuset in which each task's cpus_allowed mask needs to be changed
1092 * Iterate through each task of @cs updating its cpus_allowed to the
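
The per-task walk that lines 1090-1092 describe is a css_task_iter loop; a sketch of update_tasks_cpumask() along those lines:

static void update_tasks_cpumask(struct cpuset *cs)
{
        struct css_task_iter it;
        struct task_struct *task;

        css_task_iter_start(&cs->css, 0, &it);
        while ((task = css_task_iter_next(&it)))
                set_cpus_allowed_ptr(task, cs->effective_cpus);
        css_task_iter_end(&it);
}
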
1111 * @cs: the cpuset that needs to recompute the new effective_cpus mask
1116 * CPUs are not removed from subparts_cpus, we have to use cpu_active_mask
1117 * to mask those out.
1148 * root to a partition root. The cpus_allowed mask of the given cpuset will
1154 * root back to a non-partition root. Any CPUs in cpus_allowed that are in
1159 * list is to be changed from cpus_allowed to newmask. Otherwise,
1160 * cpus_allowed is assumed to remain the same. The cpuset should either
1163 * be granted by the parent. The function will return 1 if changes to
1178 * function will also prevent any changes to the cpu list if it is not
1184 int adding; /* Moving cpus from effective_cpus to subparts_cpus */
1185 int deleting; /* Moving cpus from subparts_cpus to effective_cpus */
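
For the partition update of lines 1148-1185, the adding/deleting decision reduces to set arithmetic on the masks. A sketch of the newmask case, using the scratch masks mentioned at line 186 (tmp->addmask, tmp->delmask):

        /* delmask: CPUs leaving the partition, i.e. currently granted to it
         * but absent from newmask */
        cpumask_andnot(tmp->delmask, cs->cpus_allowed, newmask);
        deleting = cpumask_and(tmp->delmask, tmp->delmask,
                               parent->subparts_cpus);

        /* addmask: CPUs joining the partition, i.e. in newmask and available
         * in the parent but not yet handed to any sub-partition */
        cpumask_and(tmp->addmask, newmask, parent->effective_cpus);
        adding = cpumask_andnot(tmp->addmask, tmp->addmask,
                                parent->subparts_cpus);
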
1251 * been offlined, we need to compute the real delmask
1252 * to confirm that.
1301 return 0; /* Nothing needs to be done */
1319 * newly deleted ones will be added back to effective_cpus.
1347 * @cs: the cpuset to consider
1351 * and all its descendants need to be updated.
1372 * parent, which is guaranteed to have some CPUs.
1431 * When the parent is invalid, this cpuset has to be invalid too.
1456 * let its child partition roots compete for
1482 * empty cpuset is changed, we need to rebuild sched domains.
1483 * On default hierarchy, the cpuset needs to be a partition
1516 * to use the right effective_cpus value.
1539 * @cs: the cpuset to consider
1541 * @buf: buffer of cpu numbers written to this cpuset
1574 /* Nothing to do if the cpus didn't change */
1587 * to allocated cpumasks.
1634 * Migrate memory region from one set of nodes to another. This is
1645 nodemask_t to;
1652 /* on a wq worker, no need to worry about %current's mems_allowed */
1653 do_migrate_pages(mwork->mm, &mwork->from, &mwork->to, MPOL_MF_MOVE_ALL);
1658 static void cpuset_migrate_mm(struct mm_struct *mm, const nodemask_t *from, const nodemask_t *to)
1666 mwork->to = *to;
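
Lines 1645-1666 belong to the asynchronous mm migration: cpuset_migrate_mm() packages the from/to nodemasks into a work item so do_migrate_pages() runs on a workqueue worker. The structure and worker, roughly:

struct cpuset_migrate_mm_work {
        struct work_struct work;
        struct mm_struct *mm;
        nodemask_t from;
        nodemask_t to;
};

static void cpuset_migrate_mm_workfn(struct work_struct *work)
{
        struct cpuset_migrate_mm_work *mwork =
                container_of(work, struct cpuset_migrate_mm_work, work);

        /* on a wq worker, no need to worry about %current's mems_allowed */
        do_migrate_pages(mwork->mm, &mwork->from, &mwork->to, MPOL_MF_MOVE_ALL);
        mmput(mwork->mm);
        kfree(mwork);
}
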
1681 * @tsk: the task to change
1684 * We use the mems_allowed_seq seqlock to safely update both tsk->mems_allowed
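
The seqlock update of line 1684 lets lockless readers in allocator paths retry if they race with a nodemask change. The write side, as a sketch:

        task_lock(tsk);
        local_irq_disable();
        write_seqcount_begin(&tsk->mems_allowed_seq);

        nodes_or(tsk->mems_allowed, tsk->mems_allowed, *newmems);
        mpol_rebind_task(tsk, newmems);
        tsk->mems_allowed = *newmems;

        write_seqcount_end(&tsk->mems_allowed_seq);
        local_irq_enable();
        task_unlock(tsk);

Growing the mask first (nodes_or) and only then shrinking it to *newmems means a concurrent reader sees a superset rather than an empty intermediate mask.
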
1710 * @cs: the cpuset in which each task's mems_allowed mask needs to be changed
1712 * Iterate through each task of @cs updating its mems_allowed to the
1734 * is idempotent. Also migrate pages in each mm to new nodes.
1765 /* We're done rebinding vmas to this cpuset's new mems_allowed. */
1771 * @cs: the cpuset to consider
1775 * and all its descendants need to be updated.
1794 * parent, which is guaranteed to have some MEMs.
1826 * Handle user request to change the 'mems' memory placement
1827 * of a cpuset. Needs to validate the request, update the
1831 * migrate the tasks pages to the new memory.
1836 * their mempolicies to the cpuset's new mems_allowed.
1872 retval = 0; /* Too easy - nothing to do */
1921 * @cs: the cpuset in which each task's spread flags need to be changed
1941 * bit: the bit to update (see cpuset_flagbits_t)
1942 * cs: the cpuset to update
1994 * cs: the cpuset to update
2010 * Cannot force a partial or invalid partition root to a full
2064 * cpuset as some system daemons cannot be mapped to other CPUs.
2094 * fmeter_update() - internal routine used to update fmeter.
2096 * A common data structure is passed to each of these routines,
2097 * which is used to keep track of the state required to manage the
2102 * is 1 second. Arithmetic is done using 32-bit integers scaled to
2108 * will be cut in half each 10 seconds, until it converges to zero.
2115 * Limit the count of unprocessed events to FM_MAXCNT, so as to avoid
2123 * to a value N*1000, where N is the rate of events per second.
2127 * about one in 32 seconds, it decays all the way back to zero between
2133 #define FM_MAXCNT 1000000 /* limit cnt to avoid overflow */
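
Lines 2094-2133 document the digital filter behind cpuset.memory_pressure. The decay loop implied by the half-life description, sketched with the constants shown above:

#define FM_COEF 933             /* coefficient for half-life of 10 secs */
#define FM_MAXTICKS ((u32)99)   /* useless computing more ticks than this */
#define FM_MAXCNT 1000000       /* limit cnt to avoid overflow */
#define FM_SCALE 1000           /* faux fixed point scale */

static void fmeter_update(struct fmeter *fmp)
{
        time64_t now = ktime_get_seconds();
        u32 ticks = now - fmp->time;

        if (ticks == 0)
                return;

        ticks = min(FM_MAXTICKS, ticks);
        while (ticks-- > 0)     /* decay: val *= 0.933 per elapsed second */
                fmp->val = (FM_COEF * fmp->val) / FM_SCALE;
        fmp->time = now;

        /* fold the unprocessed event count into the filtered value */
        fmp->val += ((FM_SCALE - FM_COEF) * fmp->cnt) / FM_SCALE;
        fmp->cnt = 0;
}

With FM_COEF = 933 and FM_SCALE = 1000, ten iterations multiply val by 0.933^10 ≈ 0.5, which is the 10-second half-life the comment promises.
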
2191 /* Called by cgroups to determine if a cpuset is usable; cpuset_mutex held */
2278 * fail. TODO: have a better way to handle failure here
2301 * automatically due to hotplug. In that case
2420 * Common handling for a write to a "cpus" or "mems" file.
2433 * configuration and transfers all tasks to the nearest ancestor
2436 * As writes to "cpus" or "mems" may restore @cs's execution
2443 * operation like this one can lead to a deadlock through kernfs
2489 * buffer large enough to hold the entire map. If read in smaller
2601 * Convert "root" to ENABLED, and convert "member" to DISABLED.
2860 * refuse to clone the configuration - thereby refusing the task to
2863 * users who wish to allow that scenario, then this could be
2864 * changed to grant parent->cpus_allowed-sibling_cpus_exclusive
2865 * (and likewise for mems) to the new cgroup.
2954 * Make sure the new task conforms to the current state of its parent,
3015 * or memory nodes, we need to walk over the cpuset hierarchy,
3018 * cpuset to its next-highest non-empty parent.
3034 pr_err("cpuset: failed to transfer tasks out of empty cpuset ");
3054 * as the tasks will be migrated to an ancestor.
3068 * Move tasks to the nearest ancestor with execution resources,
3116 * all its tasks are moved to the nearest ancestor with both resources.
3133 * is finished, so we won't attach a task to an empty cpuset.
3148 * Make sure that CPUs allocated to child partitions
3160 * effective_cpus or its parent becomes erroneous, we have to
3161 * transition it to the erroneous state.
3189 * back to a regular one or a partition root with no CPU allocated
3190 * from the parent may change to erroneous.
3216 * synchronized to cpu_active_mask and N_MEMORY, which is necessary in
3217 * order to make cpusets transparent (of no effect) on systems that are
3248 * isn't changed. It is extra work, but it is better to be safe.
3261 /* synchronize cpus_allowed to cpu_active_mask */
3268 * Make sure that CPUs allocated to child partitions
3286 /* synchronize mems_allowed to N_MEMORY */
3299 /* if cpus or mems changed, we need to propagate to descendants */
3333 * to a work item to avoid reverse locking order.
3367 * cpus_allowed/mems_allowed set to v2 values in the initial
3368 * cpuset_bind() call will be reset to v1 values in another
3384 * @tsk: pointer to task_struct from which to obtain cpuset->cpus_allowed.
3385 * @pmask: pointer to struct cpumask variable to receive cpus_allowed set.
3388 * attached to the specified @tsk. Guaranteed to return some non-empty
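
Given the guarantee stated at line 3388 (some non-empty mask), cpuset_cpus_allowed() is little more than locking around guarantee_online_cpus(); a sketch consistent with the doc lines above:

void cpuset_cpus_allowed(struct task_struct *tsk, struct cpumask *pmask)
{
        unsigned long flags;

        spin_lock_irqsave(&callback_lock, flags);
        rcu_read_lock();
        guarantee_online_cpus(task_cs(tsk), pmask);
        rcu_read_unlock();
        spin_unlock_irqrestore(&callback_lock, flags);
}
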
3406 * @tsk: pointer to task_struct with which the scheduler is struggling
3409 * tsk->cpus_allowed, we fall back to task_cs(tsk)->cpus_allowed. In legacy
3457 * @tsk: pointer to task_struct from which to obtain cpuset->mems_allowed.
3460 * attached to the specified @tsk. Guaranteed to return some non-empty
3481 * @nodemask: the nodemask to be checked
3492 * mem_hardwall ancestor to the specified cpuset. Call holding
3511 * node is set in the nearest hardwalled cpuset ancestor to current's cpuset,
3512 * yes. If current has access to memory reserves as an oom victim, yes.
3518 * GFP_KERNEL allocations are not so marked, so can escape to the
3557 * Allow tasks that have access to memory reserves because they have
3558 * been OOM killed to get memory anywhere.
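
Lines 3511-3558 spell out the decision order for __cpuset_node_allowed(): current's own mask first, then the OOM-victim and hardwall escapes, then the nearest mem_hardwall ancestor (line 3492). Put together as a sketch:

bool __cpuset_node_allowed(int node, gfp_t gfp_mask)
{
        struct cpuset *cs;
        unsigned long flags;
        bool allowed;

        if (in_interrupt())
                return true;
        if (node_isset(node, current->mems_allowed))
                return true;
        /* tasks with access to memory reserves (OOM victims) go anywhere */
        if (unlikely(tsk_is_oom_victim(current)))
                return true;
        if (gfp_mask & __GFP_HARDWALL)  /* hardwall request: no escape */
                return false;
        if (current->flags & PF_EXITING)        /* let dying tasks exit */
                return true;

        /* GFP_KERNEL may escape up to the nearest hardwalled ancestor */
        spin_lock_irqsave(&callback_lock, flags);
        rcu_read_lock();
        cs = nearest_hardwall_ancestor(task_cs(current));
        allowed = node_isset(node, cs->mems_allowed);
        rcu_read_unlock();
        spin_unlock_irqrestore(&callback_lock, flags);

        return allowed;
}
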
3584 * cpuset_mem_spread_node() - On which node to begin search for a file page
3585 * cpuset_slab_spread_node() - On which node to begin search for a slab page
3590 * to determine on which node to start looking, as it will for
3593 * local node to look for a free page, but rather spread the starting
3596 * We don't have to worry about the returned node being offline
3599 * The routines calling guarantee_online_mems() are careful to
3601 * should not be possible for the following code to return an
3604 * the node where the search should start. The zonelist passed to
3606 * is passed an offline node, it will fall back to the local node.
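
The spread rotors of lines 3584-3606 just walk current->mems_allowed round-robin, one node per call. A sketch of the shared helper and the file-page variant:

static int cpuset_spread_node(int *rotor)
{
        return *rotor = next_node_in(*rotor, current->mems_allowed);
}

int cpuset_mem_spread_node(void)
{
        if (current->cpuset_mem_spread_rotor == NUMA_NO_NODE)
                current->cpuset_mem_spread_rotor =
                        node_random(&current->mems_allowed);

        return cpuset_spread_node(&current->cpuset_mem_spread_rotor);
}

next_node_in() wraps around the mask, which keeps the returned node inside mems_allowed even as the rotor advances past the highest set node.
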
3637 * @tsk1: pointer to task_struct of some task.
3638 * @tsk2: pointer to task_struct of some other task.
3641 * mems_allowed of @tsk2. Used by the OOM killer to determine if
3643 * to the other.
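
The overlap test of lines 3641-3643 is a single nodemask operation:

int cpuset_mems_allowed_intersects(const struct task_struct *tsk1,
                                   const struct task_struct *tsk2)
{
        return nodes_intersects(tsk1->mems_allowed, tsk2->mems_allowed);
}

If it returns 0, killing one task cannot free memory on any node the other is allowed to use, which is the OOM-killer question line 3641 alludes to.
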
3655 * mems_allowed to the kernel log.
3673 * this flag is enabled by writing "1" to the special
3686 * ran low on memory on all nodes it was allowed to use, and
3687 * had to enter the kernel's page reclaim code in an effort to
3691 * Display to user space in the per-cpuset read-only file
3694 * (direct) page reclaim by any task attached to the cpuset.
3709 * - No need to task_lock(tsk) on this tsk->cpuset reference, as it