Lines Matching defs:slabs
7 * and only uses a centralized lock to manage a pool of partial slabs.
60 * The role of the slab_mutex is to protect the list of all the slabs
77 * Frozen slabs
89 * the partial slab counter. If taken then no new slabs may be added or
90 * removed from the lists, nor may the number of partial slabs be modified.
91 * (Note that the total number of slabs is an atomic value that may be
96 * slabs, operations can continue without any centralized lock. E.g.
97 * allocating a long series of objects that fill up slabs does not require
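As an aside, a minimal sketch of the locking rule these lines describe: the per-node partial list and its counter are only touched while list_lock is held, while frozen cpu slabs are operated on without it. The stand-in types below only mirror the relevant slub.c field names (list_lock, partial, nr_partial, slab_list); this is an illustration, not the kernel's actual add_partial() helper.

    /*
     * Illustration only, not kernel code: stand-in types mirroring the
     * slub.c fields named above. The node is assumed to be initialized.
     */
    #include <linux/list.h>
    #include <linux/spinlock.h>

    struct example_node {                   /* stands in for struct kmem_cache_node */
            spinlock_t list_lock;
            struct list_head partial;
            unsigned long nr_partial;
    };

    struct example_slab {                   /* stands in for struct slab */
            struct list_head slab_list;
    };

    static void example_add_partial(struct example_node *n, struct example_slab *slab)
    {
            unsigned long flags;

            spin_lock_irqsave(&n->list_lock, flags);
            list_add_tail(&slab->slab_list, &n->partial);
            n->nr_partial++;                /* the partial slab counter above */
            spin_unlock_irqrestore(&n->list_lock, flags);
    }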
136 * Allocations only occur from these slabs called cpu slabs.
139 * operations no list for full slabs is used. If an object in a full slab is
141 * We track full slabs for debugging purposes though because otherwise we
157 * One use of this flag is to mark slabs that are
253 * Minimum number of partial slabs. These will be left on the partial
259 * Maximum number of desirable partial slabs.
260 * The existence of more partial slabs makes kmem_cache_shrink
495 * slabs on the per cpu partial list, in order to limit excessive
496 * growth of the list. For simplicity we assume that the slabs will
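A small, hedged sketch of the sizing arithmetic this assumption implies: if slabs on the per cpu partial list are treated as half-full, a budget of N cached objects translates into roughly N * 2 / objects-per-slab slabs. The names below are illustrative, not the ones used by slub.c.

    #include <stdio.h>

    /* Round-up division, as DIV_ROUND_UP() would do in the kernel. */
    static unsigned int partial_slab_cap(unsigned int nr_objects,
                                         unsigned int objs_per_slab)
    {
            return (nr_objects * 2 + objs_per_slab - 1) / objs_per_slab;
    }

    int main(void)
    {
            /* e.g. a budget of 30 cached objects, 16 objects per slab -> 4 slabs */
            printf("%u\n", partial_slab_cap(30, 16));
            return 0;
    }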
1355 * Tracking of fully allocated slabs for debugging purposes.
1507 * @slabs: return start of list of slabs, or NULL when there's no list
1513 parse_slub_debug_flags(char *str, slab_flags_t *flags, char **slabs, bool init)
1523 * No options but restriction on slabs. This means full
1524 * debugging for slabs matching a pattern.
1569 *slabs = ++str;
1571 *slabs = NULL;
1622 * slabs means debugging is only changed for those slabs, so the global
1658 * then only the select slabs will receive the debug option(s).
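A hedged userspace sketch of the "options,slab-list" split these lines refer to; the real parse_slub_debug_flags() also decodes the flag characters and handles multiple blocks, which is omitted here. The argument string in main() is illustrative only.

    #include <stddef.h>
    #include <stdio.h>

    /* Split "options,slab1,slab2" at the first comma; NULL means no slab list. */
    static void split_debug_str(char *str, char **slabs)
    {
            *slabs = NULL;
            for (; *str; str++) {
                    if (*str == ',') {
                            *str = '\0';
                            *slabs = str + 1;       /* cf. "*slabs = ++str" above */
                            return;
                    }
            }
    }

    int main(void)
    {
            char arg[] = "FZ,dentry,kmalloc-64";    /* illustrative argument only */
            char *slabs;

            split_debug_str(arg, &slabs);
            printf("options=%s slabs=%s\n", arg, slabs ? slabs : "(all)");
            return 0;
    }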
2120 * Management of partially allocated slabs.
2281 * Racy check. If we mistakenly see no partial slabs then we
2347 * instead of attempting to obtain partial slabs from other nodes.
2351 * may return off node objects because partial slabs are obtained
2357 * This means scanning over all nodes to look for partial slabs which
2488 * unfreezes the slab and puts it on the proper list.
2661 * Unfreeze all the cpu partial slabs.
2701 int slabs = 0;
2708 if (drain && oldslab->slabs >= s->cpu_partial_slabs) {
2717 slabs = oldslab->slabs;
2721 slabs++;
2723 slab->slabs = slabs;
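The fragments above show how the length of the per cpu partial list is tracked: each slab pushed onto the list records the new count in its ->slabs field, so the head slab always carries the current length (and line 2708 moves the whole set to the node lists once the count reaches s->cpu_partial_slabs, which this sketch omits). A self-contained, hedged reconstruction of just that counting idea, with stand-in types:

    #include <stddef.h>
    #include <stdio.h>

    struct fake_slab {
            struct fake_slab *next;
            int slabs;                      /* list length when this slab was pushed */
    };

    /* Push a slab and record the new list length in it, as in the lines above. */
    static void push_partial(struct fake_slab **head, struct fake_slab *slab)
    {
            int slabs = *head ? (*head)->slabs : 0;

            slabs++;
            slab->slabs = slabs;
            slab->next = *head;
            *head = slab;
    }

    int main(void)
    {
            struct fake_slab a = {0}, b = {0}, c = {0};
            struct fake_slab *head = NULL;

            push_partial(&head, &a);
            push_partial(&head, &b);
            push_partial(&head, &c);
            printf("%d\n", head->slabs);    /* prints 3 */
            return 0;
    }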
2861 * Use the cpu notifier to ensure that the cpu slabs are flushed when
3012 pr_warn(" node %d: slabs: %ld, objs: %ld, free: %ld\n",
3594 * have a longer lifetime than the cpu slabs in most processing loads.
3654 * other processors updating the list of slabs.
4071 * Increasing the allocation order reduces the number of times that slabs
4078 * and slab fragmentation. A higher order reduces the number of partial slabs
4093 * be problematic to put into order 0 slabs because there may be too much
4100 * less of a concern for large slabs, though, which are rarely used.
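A hedged worked example for the order trade-off discussed here, assuming a 4 KiB base page and a made-up 700-byte object: higher-order slabs hold more objects and can shrink the leftover space per slab, at the cost of higher-order page allocations. The loop is illustrative, not slub.c's order calculation.

    #include <stdio.h>

    #define EXAMPLE_PAGE_SIZE 4096u         /* assumed base page size */

    int main(void)
    {
            unsigned int size = 700;        /* hypothetical object size in bytes */
            unsigned int order;

            for (order = 0; order <= 3; order++) {
                    unsigned int slab_bytes = EXAMPLE_PAGE_SIZE << order;
                    unsigned int objects = slab_bytes / size;
                    unsigned int waste = slab_bytes - objects * size;

                    printf("order %u: %u objects, %u bytes left over\n",
                           order, objects, waste);
            }
            return 0;
    }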
4344 * Per cpu partial lists mainly contain slabs that just have one
4539 * The larger the object size is, the more slabs we want on the partial
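To make the scaling concrete, a hedged sketch of the heuristic this comment introduces: the per-node minimum of partial slabs grows with log2 of the object size and is clamped to the MIN_PARTIAL/MAX_PARTIAL bounds quoted earlier. Both the formula and the 5/10 bounds are assumptions about current slub.c, not taken from the lines above.

    #include <stdio.h>

    #define EXAMPLE_MIN_PARTIAL 5           /* assumed value of MIN_PARTIAL */
    #define EXAMPLE_MAX_PARTIAL 10          /* assumed value of MAX_PARTIAL */

    static unsigned int min_partial_for(unsigned int size)
    {
            unsigned int log2sz = 0;
            unsigned int min;

            while (size >>= 1)
                    log2sz++;               /* ilog2(size) */

            min = log2sz / 2;
            if (min < EXAMPLE_MIN_PARTIAL)
                    min = EXAMPLE_MIN_PARTIAL;
            if (min > EXAMPLE_MAX_PARTIAL)
                    min = EXAMPLE_MAX_PARTIAL;
            return min;
    }

    int main(void)
    {
            /* prints 5 6 10 with the assumed bounds */
            printf("%u %u %u\n", min_partial_for(64),
                   min_partial_for(4096), min_partial_for(1u << 20));
            return 0;
    }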
4592 * Attempt to free all partial slabs on a node.
4791 * kmem_cache_shrink discards empty slabs and promotes the slabs filled
4795 * The slabs with the least items are placed last. This results in them
4819 * Build lists of slabs to discard or promote.
4830 /* We do not keep full slabs on the list */
4842 * Promote the slabs filled up most to the head of the
4850 /* Release empty slabs */
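A hedged, self-contained sketch of the shrink policy summarized in these lines: empty slabs are released and the remaining partial slabs are reordered so the fullest come first, leaving the emptiest last where they are most likely to empty out and be freed later. The real shrink path does this per node, under list_lock, with bucketed lists; only the ordering idea is shown.

    #include <stdio.h>
    #include <stdlib.h>

    struct demo_slab {
            int inuse;                      /* objects allocated from this slab */
    };

    static int fuller_first(const void *a, const void *b)
    {
            return ((const struct demo_slab *)b)->inuse -
                   ((const struct demo_slab *)a)->inuse;
    }

    int main(void)
    {
            struct demo_slab partial[] = { {1}, {0}, {7}, {3}, {0}, {5} };
            size_t i, kept = 0;

            for (i = 0; i < sizeof(partial) / sizeof(partial[0]); i++) {
                    if (partial[i].inuse == 0)
                            continue;       /* "release" empty slabs */
                    partial[kept++] = partial[i];
            }
            qsort(partial, kept, sizeof(partial[0]), fuller_first);

            for (i = 0; i < kept; i++)
                    printf("%d ", partial[i].inuse);        /* 7 5 3 1 */
            printf("\n");
            return 0;
    }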
4984 * Basic setup of slabs
5061 /* Now we can use the kmem_cache to allocate kmalloc slabs */
5180 pr_err("SLUB %s: %ld partial slabs counted but counter=%ld\n",
5193 pr_err("SLUB: %s %ld slabs counted but counter=%ld\n",
5393 SL_ALL, /* All slabs */
5394 SL_PARTIAL, /* Only partially allocated slabs */
5395 SL_CPU, /* Only slabs used for cpu caches */
5396 SL_OBJECTS, /* Determine allocated objects not slabs */
5397 SL_TOTAL /* Determine object capacity not slabs */
5452 x = slab->slabs;
5646 int slabs = 0;
5657 slabs += slab->slabs;
5661 /* Approximate half-full slabs, see slub_set_cpu_partial() */
5662 objects = (slabs * oo_objects(s->oo)) / 2;
5663 len += sysfs_emit_at(buf, len, "%d(%d)", objects, slabs);
5671 slabs = READ_ONCE(slab->slabs);
5672 objects = (slabs * oo_objects(s->oo)) / 2;
5674 cpu, objects, slabs);
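A hedged worked example of the estimate printed here: the sysfs output reports objects as (slabs * objects per slab) / 2, i.e. it assumes the cached slabs are about half-full, matching the "%d(%d)" format at line 5663. The numbers below are made up for illustration.

    #include <stdio.h>

    int main(void)
    {
            int slabs = 4;                  /* slabs on one cpu's partial list */
            int objs_per_slab = 32;         /* stands in for oo_objects(s->oo) */
            int objects = (slabs * objs_per_slab) / 2;

            printf("%d(%d)\n", objects, slabs);     /* prints "64(4)" */
            return 0;
    }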
5723 SLAB_ATTR_RO(slabs);
6112 * get here for aliasable slabs so we do not need to support