Lines Matching defs:slabs
7 * and only uses a centralized lock to manage a pool of partial slabs.
51 * The role of the slab_mutex is to protect the list of all the slabs
69 * the partial slab counter. If taken then no new slabs may be added to or
70 * removed from the lists, nor may the number of partial slabs be modified.
71 * (Note that the total number of slabs is an atomic value that may be
76 * slabs, operations can continue without any centralized lock. E.g.
77 * allocating a long series of objects that fill up slabs does not require
82 * while handling per_cpu slabs, due to kernel preemption.
85 * Allocations only occur from these slabs, called cpu slabs.
88 * operations no list for full slabs is used. If an object in a full slab is
90 * We track full slabs for debugging purposes, though, because otherwise we
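
The locking split sketched in the lines above can be summarized with a small userspace model: one lock-protected partial list (and counter) per node, plus a per-CPU slab that the allocation fast path touches without taking that lock. This is an editorial sketch only; the names node_partial, cpu_slab and alloc_fast are illustrative assumptions, not the kernel's structures.

    /* Simplified userspace model of the scheme described above: a per-node
     * list_lock protecting the partial list and its counter, plus a
     * per-thread "cpu slab" used lock-free on the allocation fast path.
     * Illustrative only; not the kernel's data structures. */
    #include <pthread.h>
    #include <stdio.h>

    struct slab {                    /* one slab's worth of objects */
        struct slab *next;           /* linkage on the node partial list */
        void *freelist;              /* first free object in this slab */
        unsigned int inuse;          /* allocated objects in this slab */
    };

    struct node_partial {            /* roughly a per-node cache structure */
        pthread_mutex_t list_lock;   /* protects partial and nr_partial */
        struct slab *partial;        /* partially allocated slabs */
        unsigned long nr_partial;    /* partial slab counter */
    };

    static __thread struct slab *cpu_slab;   /* touched without list_lock */

    void *alloc_fast(struct node_partial *n)
    {
        if (!cpu_slab || !cpu_slab->freelist) {
            /* Slow path: refill the cpu slab from the node partial list. */
            pthread_mutex_lock(&n->list_lock);
            cpu_slab = n->partial;
            if (cpu_slab) {
                n->partial = cpu_slab->next;
                n->nr_partial--;
            }
            pthread_mutex_unlock(&n->list_lock);
            if (!cpu_slab || !cpu_slab->freelist)
                return NULL;         /* a new slab would be allocated here */
        }
        void *obj = cpu_slab->freelist;       /* lock-free fast path */
        cpu_slab->freelist = *(void **)obj;   /* objects chain via first word */
        cpu_slab->inuse++;
        return obj;
    }

    int main(void)
    {
        /* A three-object slab whose free objects chain through their first word. */
        void *objs[3][4] = { { objs[1] }, { objs[2] }, { NULL } };
        struct slab s = { NULL, objs[0], 0 };
        struct node_partial node = { PTHREAD_MUTEX_INITIALIZER, &s, 1 };

        for (int i = 0; i < 3; i++)
            printf("allocated object %d at %p\n", i, alloc_fast(&node));
        return 0;
    }
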
106 * One use of this flag is to mark slabs that are
163 * Minimum number of partial slabs. These will be left on the partial
169 * Maximum number of desirable partial slabs.
170 * The existence of more partial slabs makes kmem_cache_shrink
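
Read together, these two thresholds bound the per-node partial list: a floor below which (even empty) partial slabs are kept around, and a ceiling above which shrinking becomes worthwhile. The toy function below only illustrates the floor semantics; the constant values and the helper name are assumptions for the example, not read out of this file.

    /* Toy illustration of the "keep at least a minimum of partial slabs"
     * rule: only empty slabs above the floor are released. Constant values
     * and the helper name are illustrative assumptions. */
    #include <stdio.h>

    #define MIN_PARTIAL 5    /* floor kept on the partial list */
    #define MAX_PARTIAL 10   /* above this, shrinking is worthwhile */

    static unsigned long trim_partial(unsigned long nr_partial,
                                      unsigned long nr_empty)
    {
        while (nr_partial > MIN_PARTIAL && nr_empty > 0) {
            nr_partial--;    /* release one empty slab */
            nr_empty--;
        }
        return nr_partial;
    }

    int main(void)
    {
        /* 12 partial slabs (above MAX_PARTIAL), 9 of them empty: trimming
         * stops at the MIN_PARTIAL floor, leaving 5 on the list. */
        printf("partial slabs left: %lu\n", trim_partial(12, 9));
        return 0;
    }
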
1047 * Tracking of fully allocated slabs for debugging purposes.
1068 /* Tracking of the number of slabs for debugging purposes */
1263 * @slabs: return start of list of slabs, or NULL when there's no list
1269 parse_slub_debug_flags(char *str, slab_flags_t *flags, char **slabs, bool init)
1279 * No options but restriction on slabs. This means full
1280 * debugging for slabs matching a pattern.
1325 *slabs = ++str;
1327 *slabs = NULL;
1376 * slabs means debugging is only changed for those slabs, so the global
1408 * then only the select slabs will receive the debug option(s).
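
The parsing lines above split a slub_debug-style string into an option block and an optional comma-separated list of slab names; an empty option block means full debugging for the listed slabs only. Below is a minimal userspace sketch of just that split, assuming the documented "options,slab1,slab2,..." layout; it is not the kernel's parse_slub_debug_flags().

    /* Minimal sketch of the split performed on a "slub_debug=" style string:
     * everything before the first ',' is the option block, everything after
     * it is the slab-name list. An option block that is empty (string starts
     * with ',') means full debugging for the listed slabs. Simplified. */
    #include <stdio.h>
    #include <string.h>

    static void parse(char *str, char **opts, char **slabs)
    {
        *opts = str;
        *slabs = NULL;
        char *comma = strchr(str, ',');
        if (comma) {
            *comma = '\0';        /* terminate the option block */
            *slabs = comma + 1;   /* rest is the list of slab names */
        }
    }

    int main(void)
    {
        char a[] = "FZ,dentry,kmalloc-64";
        char b[] = ",dentry";     /* no options: full debugging for "dentry" */
        char *opts, *slabs;

        parse(a, &opts, &slabs);
        printf("opts='%s' slabs='%s'\n", opts, slabs);
        parse(b, &opts, &slabs);
        printf("opts='%s' slabs='%s'\n", opts, slabs);
        return 0;
    }
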
1877 * Management of partially allocated slabs.
1965 * Racy check. If we mistakenly see no partial slabs then we
2020 * instead of attempting to obtain partial slabs from other nodes.
2024 * may return off node objects because partial slabs are obtained
2030 * This means scanning over all nodes to look for partial slabs which
2261 * slabs from diagnostic functions will not see
2262 * any frozen slabs.
2307 * Unfreeze all the cpu partial slabs.
2475 * Use the cpu notifier to ensure that the cpu slabs are flushed when
2565 pr_warn(" node %d: slabs: %ld, objs: %ld, free: %ld\n",
2963 * have a longer lifetime than the cpu slabs in most processing loads.
3019 * other processors updating the list of slabs.
3365 * Increasing the allocation order reduces the number of times that slabs
3372 * and slab fragmentation. A higher order reduces the number of partial slabs
3386 * be problematic to put into order 0 slabs because there may be too much
3393 * less of a concern for large slabs, though, which are rarely used.
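
The trade-off described above is easy to quantify: for a fixed object size, a larger slab order packs more objects per slab (fewer partial slabs and less tail waste), at the price of higher-order page allocations. The numbers below are a back-of-the-envelope illustration assuming a 4 KiB base page and a made-up 700-byte object; they are not measurements from this code.

    /* Illustrates the order trade-off: objects per slab and leftover bytes
     * for increasing allocation orders, for one hypothetical object size. */
    #include <stdio.h>

    #define PAGE_SIZE 4096UL

    int main(void)
    {
        unsigned long object_size = 700;   /* hypothetical object size */

        for (unsigned int order = 0; order <= 3; order++) {
            unsigned long slab_bytes = PAGE_SIZE << order;
            unsigned long objects   = slab_bytes / object_size;
            unsigned long waste     = slab_bytes - objects * object_size;

            printf("order %u: %5lu bytes, %3lu objects, %4lu bytes wasted\n",
                   order, slab_bytes, objects, waste);
        }
        return 0;
    }
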
3621 * Per cpu partial lists mainly contain slabs that just have one
3628 * A) The number of objects from per cpu partial slabs dumped to the
3630 * B) The number of objects in cpu partial slabs to extract from the
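
A common way to bound such a per cpu partial list is by object size, so that large-object caches keep only a couple of spare slabs per CPU while small-object caches may keep many more. The sketch below shows that shape of heuristic; the specific thresholds and cap values are illustrative assumptions, not necessarily the values used here.

    /* Size-based cap on the per cpu partial list: larger objects get a
     * smaller cap, since each cached slab already ties up more memory.
     * Thresholds and caps below are illustrative assumptions. */
    #include <stdio.h>

    static unsigned int cpu_partial_cap(unsigned long object_size)
    {
        if (object_size >= 4096) return 2;
        if (object_size >= 1024) return 6;
        if (object_size >= 256)  return 13;
        return 30;
    }

    int main(void)
    {
        unsigned long sizes[] = { 64, 512, 2048, 8192 };
        for (int i = 0; i < 4; i++)
            printf("object size %5lu -> cpu_partial cap %u\n",
                   sizes[i], cpu_partial_cap(sizes[i]));
        return 0;
    }
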
3870 * Attempt to free all partial slabs on a node.
4135 * kmem_cache_shrink discards empty slabs and promotes the slabs filled
4139 * The slabs with the fewest items are placed last. This results in them
4164 * Build lists of slabs to discard or promote.
4175 /* We do not keep full slabs on the list */
4186 * Promote the slabs filled up most to the head of the
4194 /* Release empty slabs */
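
Taken together, the shrink-related lines above describe one pass over a node's partial list: empty slabs are released, the remaining slabs are bucketed by how many objects they still hold, and the fullest are relinked at the head so they are consumed (and leave the list) first. The following is a rough array-based model of that idea, not the kernel's list-splicing implementation.

    /* Rough model of the shrink pass: drop empty slabs, bucket the rest by
     * fill level, and emit the new list order with the fullest slabs first.
     * Array-based and simplified; slab counts here are made up. */
    #include <stdio.h>

    #define OBJS_PER_SLAB 8

    int main(void)
    {
        /* inuse counts of the partial slabs on one node (0 == empty) */
        int inuse[] = { 3, 0, 7, 1, 0, 5 };
        int n = sizeof(inuse) / sizeof(inuse[0]);

        /* promote[i] counts slabs with exactly i objects still in use;
         * full slabs never appear because they are not kept on the list */
        int promote[OBJS_PER_SLAB] = { 0 };
        int discarded = 0;

        for (int i = 0; i < n; i++) {
            if (inuse[i] == 0)
                discarded++;              /* empty slab: release it */
            else
                promote[inuse[i]]++;      /* keep, bucketed by fill level */
        }

        printf("released %d empty slabs; new list order (fullest first):\n",
               discarded);
        for (int fill = OBJS_PER_SLAB - 1; fill > 0; fill--)
            for (int k = 0; k < promote[fill]; k++)
                printf("  slab with %d/%d objects in use\n", fill, OBJS_PER_SLAB);
        return 0;
    }
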
4238 * if n->nr_slabs > 0, slabs still exist on the node
4325 * Basic setup of slabs
4390 /* Now we can use the kmem_cache to allocate kmalloc slabs */
4561 pr_err("SLUB %s: %ld partial slabs counted but counter=%ld\n",
4572 pr_err("SLUB: %s %ld slabs counted but counter=%ld\n",
4745 /* Push back cpu slabs */
4876 SL_ALL, /* All slabs */
4877 SL_PARTIAL, /* Only partially allocated slabs */
4878 SL_CPU, /* Only slabs used for cpu caches */
4879 SL_OBJECTS, /* Determine allocated objects not slabs */
4880 SL_TOTAL /* Determine object capacity not slabs */
5211 SLAB_ATTR_RO(slabs);
5571 * get here for aliasable slabs so we do not need to support