Lines Matching defs:of

4  *  Processor and Memory placement constraints for sets of tasks.
17 * 2008 Rework of the scheduler domains and CPU hotplug handling
20 * This file is subject to the terms and conditions of the GNU General Public
21 * License. See the file COPYING in the main directory of the Linux
78 spinlock_t lock; /* guards read or write of above */
151 /* number of CPUs in subparts_cpus */
160 * child_ecpus_count - # of children with use_parent_ecpus set
174 * None of the cpus in cpus_allowed can be put into the parent's
273 * cpuset_for_each_child - traverse online children of a cpuset
276 * @parent_cs: target cpuset to walk children of
278 * Walk @child_cs through the online children of @parent_cs. Must be used
285 * cpuset_for_each_descendant_pre - pre-order walk of a cpuset's descendants
288 * @root_cs: target cpuset to walk descendants of
290 * Walk @des_cs through the online descendants of @root_cs. Must be used
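The two iterators above differ only in scope: one visits direct online children, the other performs a pre-order walk of the whole subtree (parent before its descendants). A minimal standalone sketch of that pre-order visit order, using a hypothetical `struct node` in place of the kernel's css machinery (the real macros wrap `css_for_each_descendant_pre()` under RCU):

```c
/* Toy pre-order walk illustrating the traversal order that
 * cpuset_for_each_descendant_pre() provides.  struct node is
 * hypothetical; the kernel iterates css objects under rcu_read_lock(). */
struct node {
    int id;
    struct node *child;    /* first child */
    struct node *sibling;  /* next sibling */
};

/* Visit the node itself first, then each subtree in order. */
static int preorder(const struct node *n, int *out, int pos)
{
    if (!n)
        return pos;
    out[pos++] = n->id;                    /* parent before children */
    for (const struct node *c = n->child; c; c = c->sibling)
        pos = preorder(c, out, pos);
    return pos;
}

/* Walk a fixed 4-node sample tree (0 -> {1 -> {3}, 2}), writing the
 * visit order into out[]; returns the number of nodes visited. */
static int sample_walk(int *out)
{
    struct node d = {3, 0, 0};
    struct node b = {1, &d, 0};
    struct node c = {2, 0, 0};
    struct node a = {0, &b, 0};

    b.sibling = &c;
    return preorder(&a, out, 0);
}
```

The pre-order guarantee matters to the callers noted above: a cpuset is always seen before any of its descendants, so parent state (effective masks, flags) is settled before children are examined.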
301 * task's cpuset pointer. See "The task_lock() exception", at the end of this
316 * from one of the callbacks into the cpuset code from within
327 * small pieces of code, such as when reading out possibly multi-word
374 * Return in pmask the portion of a task's cpusets's cpus_allowed that
375 * are online and are capable of running the task. If none are found,
380 * of cpu_active_mask.
401 * consequence of a race between cpuset_hotplug_work
416 * Return in *pmask the portion of a cpusets's mems_allowed that
422 * of node_states[N_MEMORY].
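Both `guarantee_online_*` helpers follow the same fallback pattern: intersect the cpuset's allowed mask with the online mask, and if the result is empty, climb to the parent and retry. A hedged sketch with toy `unsigned long` masks standing in for `cpumask`/`nodemask` (names here are illustrative, not the kernel's):

```c
/* Toy version of the guarantee_online_* fallback: find the nearest
 * ancestor whose allowed mask intersects the online mask, and return
 * that intersection.  Falls back to the full online mask if even the
 * root has nothing online (hypothetical simplification). */
struct toy_cs {
    unsigned long allowed;   /* allowed CPUs/nodes as a bitmask */
    struct toy_cs *parent;   /* NULL at the root */
};

static unsigned long toy_online_portion(const struct toy_cs *cs,
                                        unsigned long online)
{
    while (cs && !(cs->allowed & online))
        cs = cs->parent;                  /* walk up until non-empty */
    return cs ? (cs->allowed & online) : online;
}
```

This is why a task never ends up with an empty mask during hotplug: some ancestor (ultimately the top cpuset) always intersects the online set.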
455 * is_cpuset_subset(p, q) - Is cpuset p a subset of cpuset q?
457 * One cpuset is a subset of another if all its allowed CPUs and
458 * Memory Nodes are a subset of the other, and its exclusive flags
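The subset relation described above combines three conditions: CPU containment, memory-node containment, and exclusivity flags that may only be set where the other cpuset's are. A sketch with plain bitmasks in place of the kernel's `cpumask`/`nodemask` types (struct and function names are illustrative):

```c
/* Toy model of the is_cpuset_subset(p, q) test described above. */
struct toy_cpuset {
    unsigned long cpus;   /* allowed CPUs as a bitmask */
    unsigned long mems;   /* allowed memory nodes as a bitmask */
    int cpu_exclusive;    /* 0 or 1 */
    int mem_exclusive;    /* 0 or 1 */
};

/* p is a subset of q if all of p's CPUs and nodes appear in q, and
 * p's exclusive flags are set only where q's are also set. */
static int toy_is_subset(const struct toy_cpuset *p,
                         const struct toy_cpuset *q)
{
    return (p->cpus & ~q->cpus) == 0 &&
           (p->mems & ~q->mems) == 0 &&
           p->cpu_exclusive <= q->cpu_exclusive &&
           p->mem_exclusive <= q->mem_exclusive;
}
```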
474 * Only one of the two input arguments should be non-NULL.
580 * If we replaced the flag and mask values of the current cpuset
585 * 'cur' is the address of an actual, in-use cpuset. Operations
586 * such as list traversal that depend on the actual address of the
589 * 'trial' is the address of bulk structure copy of cur, with
590 * perhaps one or more of the fields cpus_allowed, mems_allowed,
604 /* Each of our child cpusets must be a subset of us */
616 /* On legacy hierarchy, we must be a subset of our parent cpuset. */
715 * This function builds a partial partition of the system's CPUs.
716 * A 'partial partition' is a set of non-overlapping subsets whose
717 * union is a subset of that set.
718 * The output of this function needs to be passed to kernel/sched/core.c
724 * for a background explanation of this.
726 * Does not return errors, on the theory that the callers of this
735 * top-down scan of all cpusets. For our purposes, rebuilding
738 * csa - (for CpuSet Array) Array of pointers to all the cpusets
741 i.e. the set of domains (subsets) of CPUs such that the
742 * cpus_allowed of every cpuset marked is_sched_load_balance
743 * is a subset of one of these domains, while there are as
745 * doms - Conversion of 'csa' to an array of cpumasks, for passing to
751 * Finding the best partition (set of domains):
753 * load balanced cpusets (using the array of cpuset pointers in
754 * csa[]) looking for pairs of cpusets that have overlapping
760 * The union of the cpus_allowed masks from the set of
762 * element of the partition (one sched domain) to be passed to
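The pairwise scan described above reduces to a simple fixed point: keep unioning any two candidate masks that overlap until every remaining mask is disjoint; each survivor becomes one sched domain. A toy sketch with `unsigned long` masks standing in for the cpumasks in `csa[]` (a simplification of the real algorithm, which also tracks per-cpuset domain attributes):

```c
/* Repeatedly merge any two overlapping masks in m[0..n-1] until all
 * remaining masks are pairwise disjoint.  Returns the number of
 * disjoint masks left -- the number of sched domains in the toy model. */
static int merge_overlaps(unsigned long *m, int n)
{
    int changed = 1;

    while (changed) {
        changed = 0;
        for (int i = 0; i < n; i++) {
            for (int j = i + 1; j < n; j++) {
                if (m[i] & m[j]) {        /* overlapping sets ... */
                    m[i] |= m[j];         /* ... merge into one domain */
                    m[j] = m[--n];        /* drop j, keep array compact */
                    changed = 1;
                    j--;                  /* re-examine the moved entry */
                }
            }
        }
    }
    return n;
}
```

For example, masks {0,1}, {1,2} and {3} collapse to two disjoint domains {0,1,2} and {3}, matching the comment's claim that the union of overlapping `cpus_allowed` masks forms one element of the partition.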
767 struct cpuset *cp; /* top-down scan of cpusets */
768 struct cpuset **csa; /* array of all cpuset ptrs */
773 int ndoms = 0; /* number of sched domains in result */
782 /* Special case for the 99% of systems with one, full, sched domain */
817 * latter: All child cpusets contain a subset of the
819 * update_domain_attr_tree() to calc relax_domain_level of
823 * is a subset of the root's effective_cpus.
852 /* Find the best partition (set of sched domains) */
885 * The rest of the code, including the scheduler, can deal with
1011 * If the flag 'sched_load_balance' of any cpuset with non-empty
1043 * With subpartition CPUs, however, the effective CPUs of a partition
1044 * root should be only a subset of the active CPUs. Since a CPU in any
1089 * update_tasks_cpumask - Update the cpumasks of tasks in the cpuset.
1092 * Iterate through each task of @cs updating its cpus_allowed to the
1109 * compute_effective_cpumask - Compute the effective cpumask of the cpuset
1114 * If the parent has subpartition CPUs, include them in the list of
1140 * update_parent_subparts_cpumask - update subparts_cpus mask of parent cpuset
1148 * root to a partition root. The cpus_allowed mask of the given cpuset will
1162 * state may change if newmask is NULL and none of the requested CPUs can
1175 * Because of the implicit cpu exclusive nature of a partition root,
1179 * a superset of children's cpu lists.
1250 * As some of the CPUs in subparts_cpus might have
1329 * Some of the CPUs in subparts_cpus might have been offlined.
1350 * When the configured cpumask is changed, the effective cpumasks of this cpuset
1371 * If it becomes empty, inherit the effective mask of the
1416 * of CS_CPU_EXCLUSIVE anyway. So we can
1481 * On legacy hierarchy, if the effective cpumask of any non-
1538 * update_cpumask - update the cpus_allowed mask of a cpuset and all tasks in it
1541 * @buf: buffer of cpu numbers written to this cpuset
1595 /* Cpumask of a partition root cannot be empty */
1609 * Make sure that subparts_cpus is a subset of cpus_allowed.
1623 * For partition root, update the cpumasks of sibling
1634 * Migrate memory region from one set of nodes to another. This is
1709 * update_tasks_nodemask - Update the nodemasks of tasks in the cpuset.
1712 * Iterate through each task of @cs updating its mems_allowed to the
1774 * When configured nodemask is changed, the effective nodemasks of this cpuset
1793 * If it becomes empty, inherit the effective mask of the
1827 * of a cpuset. Needs to validate the request, update the
1920 * update_tasks_flags - update the spread flags of tasks in the cpuset.
1923 * Iterate through each task of @cs updating its spread flags. As this
2063 * Update cpumask of parent's tasks except when it is the top
2093 * fmeter_getrate() - returns the recent rate of such events.
2096 * A common data structure is passed to each of these routines,
2097 * which is used to keep track of the state required to manage the
2100 * The filter works on the number of events marked per unit time.
2103 * simulate 3 decimal digits of precision (multiplied by 1000).
2105 * With an FM_COEF of 933, and a time base of 1 second, the filter
2106 * has a half-life of 10 seconds, meaning that if the events quit
2115 * Limit the count of unprocessed events to FM_MAXCNT, so as to avoid
2123 * to a value N*1000, where N is the rate of events per second.
2131 #define FM_COEF 933 /* coefficient for half-life of 10 secs */
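The filter sketched below mirrors the behaviour the comments describe: once per second the running value decays by a factor of FM_COEF/1000 (half-life about 10 seconds, since 0.933^10 ≈ 0.5), each marked event is counted with three decimal digits of precision, and a steady rate of N events per second makes the value converge to N*1000. This is a simplified standalone model, not the kernel's exact `fmeter_update()`:

```c
/* Toy model of the cpuset fmeter digital filter described above. */
#define FM_COEF  933    /* per-second decay coefficient, x1000 */
#define FM_SCALE 1000   /* 3 decimal digits of precision */

struct toy_fmeter {
    int cnt;   /* events marked since the last tick, x1000 */
    int val;   /* filtered rate, x1000 */
};

/* Each event counts as FM_SCALE internally, giving N*1000 per second
 * for a rate of N events/sec. */
static void toy_fmeter_markevent(struct toy_fmeter *f)
{
    f->cnt += FM_SCALE;
}

/* Advance the filter by one simulated second: decay the old value,
 * then fold in the events marked during that second. */
static void toy_fmeter_tick(struct toy_fmeter *f)
{
    f->val = (FM_COEF * f->val) / FM_SCALE;
    f->val += ((FM_SCALE - FM_COEF) * f->cnt) / FM_SCALE;
    f->cnt = 0;
}
```

With no new events, a value of 10000 drops to roughly 5000 after ten ticks (the 10-second half-life); with a steady three events per second, the value settles near 3000.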
2324 /* The various types of files and directories in a cpuset file system */
2422 static ssize_t cpuset_write_resmask(struct kernfs_open_file *of, char *buf, size_t nbytes, loff_t off)
2424 struct cpuset *cs = css_cs(of_css(of));
2450 kernfs_break_active_protection(of->kn);
2465 switch (of_cft(of)->private) {
2481 kernfs_unbreak_active_protection(of->kn);
2490 * chunks, there is no guarantee of atomicity. Since the display format
2491 * used, list of ranges of sequential numbers, is variable length,
2593 static ssize_t sched_partition_write(struct kernfs_open_file *of, char *buf, size_t nbytes, loff_t off)
2595 struct cpuset *cs = css_cs(of_css(of));
2627 * for the common functions, 'private' gives the type of file
2787 * cgrp: control group that the new cpuset will be part of
2954 * Make sure the new task conforms to the current state of its parent,
3034 pr_err("cpuset: failed to transfer tasks out of empty cpuset ");
3217 * order to make cpusets transparent (of no effect) on systems that are
3218 * actively using CPU hotplug but making no active use of cpusets.
3283 /* we don't mess with cpumasks of tasks in top_cpuset */
3387 * Description: Returns the cpumask_var_t cpus_allowed of the cpuset
3389 * subset of cpu_online_mask, even if this means going outside the
3459 * Description: Returns the nodemask_t mems_allowed of the cpuset
3461 * subset of node_states[N_MEMORY], even if this means going outside the
3483 * Are any of the nodes in the nodemask allowed in current->mems_allowed?
3526 * cpuset are short of memory, might require taking the callback_lock.
3531 * in interrupt, of course).
3592 * system buffers and inode caches, then instead of starting on the
3637 * @tsk1: pointer to task_struct of some task.
3638 * @tsk2: pointer to task_struct of some other task.
3641 * mems_allowed of @tsk2. Used by the OOM killer to determine if
3642 * one of the task's memory usage might impact the memory available
3654 * Description: Prints current's name, cpuset name, and cached copy of its
3672 * Collection of memory_pressure is suppressed unless
3680 * cpuset_memory_pressure_bump - keep stats of per-cpuset reclaims.
3682 * Keep a running average of the rate of synchronous (direct)
3693 * representing the recent rate of entry into the synchronous