Lines Matching refs:node

8  * NUMA policy allows the user to give hints about which node(s) memory should
24 * FIXME: memory is allocated starting with the first node
28 * preferred Try a specific node first before normal fallback.
34 * default Allocate on the local node first, or when on a VMA
131 * numa_map_to_online_node - Find closest online node
132 * @node: Node id to start the search
134 * Look up the next closest node by distance if @node is not online.
136 int numa_map_to_online_node(int node)
140 if (node == NUMA_NO_NODE || node_online(node))
141 return node;
143 min_node = node;
145 dist = node_distance(node, n);
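
The hits above (131-145) are from numa_map_to_online_node() in mm/mempolicy.c; the matched lines give nearly the whole function. A sketch of how they fit together, as the function reads in kernels of this vintage (details may vary by version):

    int numa_map_to_online_node(int node)
    {
        int min_dist = INT_MAX, dist, n, min_node;

        /* Already online (or no node requested): nothing to map. */
        if (node == NUMA_NO_NODE || node_online(node))
            return node;

        /* Otherwise walk the online nodes and keep the closest one. */
        min_node = node;
        for_each_online_node(n) {
            dist = node_distance(node, n);
            if (dist < min_dist) {
                min_dist = dist;
                min_node = n;
            }
        }

        return min_node;
    }
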
159 int node;
164 node = numa_node_id();
165 if (node != NUMA_NO_NODE) {
166 pol = &preferred_node_policy[node];
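
Lines 159-166 are from get_task_policy(); a sketch of the surrounding logic, assuming the v5.x layout, showing the per-node preferred_node_policy[] fallback for tasks with no explicit policy:

    struct mempolicy *get_task_policy(struct task_struct *p)
    {
        struct mempolicy *pol = p->mempolicy;
        int node;

        if (pol)
            return pol;

        node = numa_node_id();
        if (node != NUMA_NO_NODE) {
            pol = &preferred_node_policy[node];
            /* preferred_node_policy is not initialised early in boot */
            if (pol->mode)
                return pol;
        }

        return &default_policy;
    }
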
350 int node = first_node(pol->w.user_nodemask);
352 if (node_isset(node, *nodes)) {
353 pol->v.preferred_node = node;
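
Lines 350-353 are the MPOL_F_STATIC_NODES branch of the preferred-policy rebind path (mpol_rebind_preferred() in kernels of this era). A partial sketch; the relative-nodes and plain-remap branches are elided:

    static void mpol_rebind_preferred(struct mempolicy *pol,
                                      const nodemask_t *nodes)
    {
        if (pol->flags & MPOL_F_STATIC_NODES) {
            int node = first_node(pol->w.user_nodemask);

            /* Keep the user's node only if the new mask still allows
             * it; otherwise fall back to local allocation. */
            if (node_isset(node, *nodes)) {
                pol->v.preferred_node = node;
                pol->flags &= ~MPOL_F_LOCAL;
            } else
                pol->flags |= MPOL_F_LOCAL;
        }
        /* MPOL_F_RELATIVE_NODES and cpuset-remap cases elided */
    }
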
461 * 0 - pages are placed on the right node or queued successfully.
466 * existing page was already on a node that does not follow the
513 * 0 - pages are placed on the right node or queued successfully.
517 * on a node that does not follow the policy.
911 /* else return empty node mask for local allocation */
1064 * Migrate pages from one node to a target node.
1127 * 'source' and 'dest' bits are the same, this represents a node
1135 * if possible the dest node is not already occupied by some other
1136 * source node, minimizing the risk of overloading the memory on a
1137 * node that would happen if we migrated incoming memory to a node
1138 * before migrating outgoing memory from that same node.
1146 * the scan of tmp without finding any node that moved, much less
1147 * moved to an empty node, then there is nothing left worth migrating.
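
Lines 1127-1147 describe the pairwise <source, dest> selection in do_migrate_pages(). A sketch of that selection loop as it looks in v5.x kernels (simplified; error handling around migrate_to_node() may differ by version):

    tmp = *from;
    while (!nodes_empty(tmp)) {
        int s, d;
        int source = NUMA_NO_NODE;
        int dest = 0;

        for_each_node_mask(s, tmp) {
            /* If the node counts differ, the relative relationship
             * cannot be preserved anyway, so skip sources that are
             * already in the destination mask. */
            if ((nodes_weight(*from) != nodes_weight(*to)) &&
                (node_isset(s, *to)))
                continue;

            d = node_remap(s, *from, *to);
            if (s == d)
                continue;    /* same node: stays put */

            source = s;
            dest = d;
            /* Prefer a dest that is not itself a pending source. */
            if (!node_isset(dest, tmp))
                break;
        }
        if (source == NUMA_NO_NODE)
            break;    /* no node moved: nothing left worth migrating */

        node_clear(source, tmp);
        err = migrate_to_node(mm, source, dest, flags);
        if (err > 0)
            busy += err;
        if (err < 0)
            break;
    }
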
1160 * node relationship of the pages established between
1165 * this node relative relationship. In that case, skip
1166 * copying memory from a node that is in the destination
1371 /* Copy a node mask from user space. */
1432 /* Copy a kernel node mask to user space */
1765 * do so then migration (at least from node to node) is not
1880 /* Return the node id preferred by the given mempolicy, or the given id */
1889 * requested node and not break the policy.
1910 * Depending on the memory policy, provide a node from which to allocate the
1916 int node = numa_mem_id();
1919 return node;
1923 return node;
1940 * first node.
1944 zonelist = &NODE_DATA(node)->node_zonelists[ZONELIST_FALLBACK];
1947 return z->zone ? zone_to_nid(z->zone) : node;
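
Lines 1910-1947 are from mempolicy_slab_node(). A sketch of the whole function under the v5.x field layout (pol->v.nodes and friends were renamed in later kernels), showing how MPOL_BIND walks the node's fallback zonelist to find the first allowed node:

    unsigned int mempolicy_slab_node(void)
    {
        struct mempolicy *policy;
        int node = numa_mem_id();

        if (in_interrupt())
            return node;

        policy = current->mempolicy;
        if (!policy || policy->flags & MPOL_F_LOCAL)
            return node;

        switch (policy->mode) {
        case MPOL_PREFERRED:
            /* MPOL_F_LOCAL was handled above */
            return policy->v.preferred_node;

        case MPOL_INTERLEAVE:
            return interleave_nodes(policy);

        case MPOL_BIND: {
            struct zoneref *z;
            /* Follow the bind policy behind this node's fallback
             * zonelist and pick the first zone in the allowed mask. */
            struct zonelist *zonelist;
            enum zone_type highest_zoneidx = gfp_zone(GFP_KERNEL);

            zonelist = &NODE_DATA(node)->node_zonelists[ZONELIST_FALLBACK];
            z = first_zones_zonelist(zonelist, highest_zoneidx,
                                     &policy->v.nodes);
            return z->zone ? zone_to_nid(z->zone) : node;
        }
        default:
            BUG();
        }
    }
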
1957 * node in pol->v.nodes (starting from n=0), wrapping around if n exceeds the
1976 /* Determine a node number for interleave */
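
Line 1957 is the comment above the static-interleave helper (offset_il_node() in kernels of this era), which line 1976's interleave_nid() calls. A sketch of that helper:

    static unsigned offset_il_node(struct mempolicy *pol, unsigned long n)
    {
        unsigned nnodes = nodes_weight(pol->v.nodes);
        unsigned target;
        int i;
        int nid;

        if (!nnodes)
            return numa_node_id();
        /* Wrap n around the number of configured interleave nodes,
         * then walk the nodemask to the n'th set bit. */
        target = (unsigned int)n % nnodes;
        nid = first_node(pol->v.nodes);
        for (i = 0; i < target; i++)
            nid = next_node(nid, pol->v.nodes);
        return nid;
    }
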
2039 * initialize the argument nodemask to contain the single node for
2158 * @node: Which node to prefer for allocation (modulo policy).
2159 * @hugepage: for hugepages try only the preferred node if possible
2170 unsigned long addr, int node, bool hugepage)
2189 int hpage_node = node;
2193 * allows the current node (or other explicitly preferred
2194 * node) we only try to allocate from the current/preferred
2195 * node and don't fall back to other nodes, as the cost of
2199 * node in its nodemask, we allocate the standard way.
2208 * First, try to allocate THP only on the local node, but
2229 preferred_nid = policy_node(gfp, pol, node);
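
Lines 2158-2229 are from the THP branch of alloc_pages_vma(). A simplified sketch of that branch as it reads around v5.x (the retry without __GFP_THISNODE exists upstream; surrounding locking and the non-THP path are elided):

    if (unlikely(IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && hugepage)) {
        int hpage_node = node;

        /* Only honour an explicit MPOL_PREFERRED node here; for
         * policies that allow the current node we stay local, since
         * remote THP access costs tend to offset THP benefits. */
        if (pol->mode == MPOL_PREFERRED && !(pol->flags & MPOL_F_LOCAL))
            hpage_node = pol->v.preferred_node;

        nmask = policy_nodemask(gfp, pol);
        if (!nmask || node_isset(hpage_node, *nmask)) {
            mpol_cond_put(pol);
            /* Try only the preferred node first, without reclaiming
             * unnecessarily: __GFP_THISNODE forbids fallback. */
            page = __alloc_pages_node(hpage_node,
                gfp | __GFP_THISNODE | __GFP_NORETRY, order);

            /* If direct reclaim is allowed, retry permitting remote
             * nodes as well. */
            if (!page && (gfp & __GFP_DIRECT_RECLAIM))
                page = __alloc_pages_node(hpage_node, gfp, order);

            goto out;
        }
    }
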
2443 * mpol_misplaced - check whether current page node is valid in policy
2449 * Look up the current policy node id for vma, addr and compare it to the
2450 * page's node id.
2453 * -1 - not misplaced, page is on the right node
2454 * node - node id where the page should be
2493 * else select nearest allowed node, if any.
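
Line 2493 is from the MPOL_BIND case of mpol_misplaced(). A sketch of that case under the v5.x layout, showing how the nearest allowed node is chosen when the page's current node is outside the policy nodemask:

    case MPOL_BIND:
        /* The page's current node is allowed: not misplaced. */
        if (node_isset(curnid, pol->v.nodes))
            goto out;
        /* Else select the nearest allowed node, if any, by walking
         * the local node's zonelist restricted to the policy mask. */
        z = first_zones_zonelist(
                node_zonelist(numa_node_id(), GFP_HIGHUSER),
                gfp_zone(GFP_HIGHUSER),
                &pol->v.nodes);
        polnid = zone_to_nid(z->zone);
        break;
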
2509 /* Migrate the page towards the node whose CPU is referencing it */
2549 static void sp_node_init(struct sp_node *node, unsigned long start,
2552 node->start = start;
2553 node->end = end;
2554 node->policy = pol;
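
Lines 2549-2554 are in fact the whole of sp_node_init(), which fills in a shared-policy interval node; only the closing brace and the second half of the parameter list are missing from the hits:

    static void sp_node_init(struct sp_node *node, unsigned long start,
                             unsigned long end, struct mempolicy *pol)
    {
        node->start = start;
        node->end = end;
        node->policy = pol;
    }
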
2810 * fall back to the largest node if they're all smaller.
2816 /* Preserve the largest node */
2822 /* Interleave this node? */
2900 * Insist on a nodelist of one node only, although later
2901 * we use first_node(nodes) to grab a single node, so here
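
Lines 2900-2901 are from the MPOL_PREFERRED case of mpol_parse_str(), the tmpfs mount-option parser. A sketch of the check the comment describes, as it reads in kernels of this era:

    case MPOL_PREFERRED:
        /*
         * Insist on a nodelist of one node only, although later
         * we use first_node(nodes) to grab a single node, so here
         * nodelist (or nodes) cannot be empty.
         */
        if (nodelist) {
            char *rest = nodelist;
            while (isdigit(*rest))
                rest++;
            if (*rest)
                goto out;
            if (nodes_empty(nodes))
                goto out;
        }
        break;
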
3001 * longest flag, "relative", and to display at least a few node ids.