Lines Matching defs:partial

7  * and only uses a centralized lock to manage a pool of partial slabs.
80 * on any list except per cpu partial list. The processor that froze the
88 * The list_lock protects the partial and full list on each node and
89 * the partial slab counter. If taken then no new slabs may be added or
90 * removed from the lists, nor may the number of partial slabs be modified.
95 * much as possible. As long as SLUB does not have to handle partial
138 * Slabs with free elements are kept on a partial list and during regular
140 * freed then the slab will show up again on the partial lists.
253 * Minimum number of partial slabs. These will be left on the partial
259 * Maximum number of desirable partial slabs.
260 * The existence of more partial slabs makes kmem_cache_shrink
261 * sort the partial list by the number of objects in use.
495 * slabs on the per cpu partial list, in order to limit excessive
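
The fragments above (lines 88-95, 253-261, 495) describe a single invariant: each node keeps a doubly linked list of partial slabs plus an nr_partial counter, and both may only change together while that node's list_lock is held. A minimal userspace model of just that invariant, with invented names (partial_node, fake_slab) and a pthread mutex standing in for list_lock, could look like this:

    /* Illustrative sketch only: not kernel code or kernel API. */
    #include <assert.h>
    #include <pthread.h>

    struct fake_slab {
            struct fake_slab *prev, *next;  /* doubly linked, like list_head */
    };

    struct partial_node {
            pthread_mutex_t list_lock;      /* models n->list_lock   */
            struct fake_slab partial;       /* models n->partial     */
            unsigned long nr_partial;       /* models n->nr_partial  */
    };

    static void node_init(struct partial_node *n)
    {
            pthread_mutex_init(&n->list_lock, NULL);
            n->partial.prev = n->partial.next = &n->partial;
            n->nr_partial = 0;
    }

    static void add_partial(struct partial_node *n, struct fake_slab *s)
    {
            pthread_mutex_lock(&n->list_lock);
            s->next = n->partial.next;      /* insert at the head */
            s->prev = &n->partial;
            n->partial.next->prev = s;
            n->partial.next = s;
            n->nr_partial++;                /* counter moves with the list */
            pthread_mutex_unlock(&n->list_lock);
    }

    static void remove_partial(struct partial_node *n, struct fake_slab *s)
    {
            pthread_mutex_lock(&n->list_lock);
            s->prev->next = s->next;
            s->next->prev = s->prev;
            n->nr_partial--;
            pthread_mutex_unlock(&n->list_lock);
    }

    int main(void)
    {
            struct partial_node n;
            struct fake_slab a, b;

            node_init(&n);
            add_partial(&n, &a);
            add_partial(&n, &b);
            remove_partial(&n, &a);
            assert(n.nr_partial == 1);      /* list and counter stayed in step */
            return 0;
    }
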
2127 list_add_tail(&slab->slab_list, &n->partial);
2129 list_add(&slab->slab_list, &n->partial);
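
The list_add_tail()/list_add() pair at 2127 and 2129 chooses where a slab re-enters the node's partial list. Since allocation takes slabs from the front (and the shrink comment at 4792 promotes the fullest slabs to the head for the same reason), head insertion means "allocate from this slab again soon", while tail insertion lets a slab drain until it can be freed. A small sketch of that placement choice, with made-up types (pnode, pslab) rather than the kernel's list_head API:

    #include <stdbool.h>
    #include <stdio.h>

    struct pslab {
            const char *name;
            struct pslab *prev, *next;
    };

    struct pnode {
            struct pslab partial;           /* circular list head */
    };

    static void pnode_init(struct pnode *n)
    {
            n->partial.prev = n->partial.next = &n->partial;
            n->partial.name = "<head>";
    }

    static void add_partial(struct pnode *n, struct pslab *s, bool to_tail)
    {
            /* insert after the last element for "tail", after the head for "head" */
            struct pslab *at = to_tail ? n->partial.prev : &n->partial;

            s->prev = at;
            s->next = at->next;
            at->next->prev = s;
            at->next = s;
    }

    int main(void)
    {
            struct pnode n;
            struct pslab hot = { .name = "hot" }, cold = { .name = "cold" };

            pnode_init(&n);
            add_partial(&n, &cold, true);   /* tail: let it drain and be freed */
            add_partial(&n, &hot, false);   /* head: next allocation picks it  */
            printf("next slab picked: %s\n", n.partial.next->name);  /* "hot" */
            return 0;
    }
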
2149 * slab from the n->partial list. Remove only a single object from the slab, do
2180 * and put the slab on the partial (or full) list.
2217 * Remove slab from the partial list, freeze it and
2270 * Try to allocate a partial slab from a specific node.
2281 * Racy check. If we mistakenly see no partial slabs then we
2283 * partial slab and there is none available then get_partial()
2290 list_for_each_entry_safe(slab, slab2, &n->partial, slab_list) {
2347 * instead of attempting to obtain partial slabs from other nodes.
2351 * may return off node objects because partial slabs are obtained
2357 * This means scanning over all nodes to look for partial slabs which
2394 * Get a partial slab, lock it and return it.
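
Lines 2270-2290 hint at the shape of the per-node allocation path: a racy, lockless peek at the partial count, then a list_lock-protected safe walk of n->partial that detaches one usable slab. A condensed userspace version of that pattern, using invented structures (fnode, fslab) and a plain mutex instead of the kernel primitives:

    #include <pthread.h>
    #include <stddef.h>

    struct fslab {
            struct fslab *prev, *next;
            unsigned int free_objects;
    };

    struct fnode {
            pthread_mutex_t list_lock;
            struct fslab partial;           /* circular list head */
            unsigned long nr_partial;
    };

    static struct fslab *get_partial_node(struct fnode *n)
    {
            struct fslab *slab, *next, *found = NULL;

            /*
             * Racy check: if we see no partial slabs, skip the node without
             * taking the lock.  Worst case we miss a slab that appeared just
             * after the check and fall back to allocating a new slab.
             */
            if (!n->nr_partial)
                    return NULL;

            pthread_mutex_lock(&n->list_lock);
            for (slab = n->partial.next; slab != &n->partial; slab = next) {
                    next = slab->next;      /* "safe" walk: slab may be unlinked */
                    if (!slab->free_objects)
                            continue;
                    /* detach the slab; the counter follows under the same lock */
                    slab->prev->next = slab->next;
                    slab->next->prev = slab->prev;
                    n->nr_partial--;
                    found = slab;
                    break;
            }
            pthread_mutex_unlock(&n->list_lock);
            return found;
    }

    int main(void)
    {
            struct fnode n = { .nr_partial = 0 };
            struct fslab a = { .free_objects = 3 };

            pthread_mutex_init(&n.list_lock, NULL);
            n.partial.prev = n.partial.next = &n.partial;
            a.prev = a.next = &n.partial;   /* link one partial slab by hand */
            n.partial.next = n.partial.prev = &a;
            n.nr_partial = 1;

            return get_partial_node(&n) == &a ? 0 : 1;
    }
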
2661 * Unfreeze all the cpu partial slabs.
2669 partial_slab = this_cpu_read(s->cpu_slab->partial);
2670 this_cpu_write(s->cpu_slab->partial, NULL);
2683 c->partial = NULL;
2691 * partial slab slot if available.
2694 * per node partial list.
2705 oldslab = this_cpu_read(s->cpu_slab->partial);
2711 * per node partial list. Postpone the actual unfreezing
2726 this_cpu_write(s->cpu_slab->partial, slab);
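
The per-CPU partial slot read and written at 2669-2726 is not a list_head list at all: slabs are chained through a next pointer and pushed LIFO onto the current CPU's slot, and once the chain grows past a per-cache limit the old chain is handed back to the node's partial lists. The sketch below compresses that into one function with invented names (cpu_slot, cslab, CPU_PARTIAL_LIMIT); note that the real code postpones the actual unfreezing until after its local lock is dropped (the comment at 2711), whereas the sketch drains inline for brevity.

    #include <stddef.h>
    #include <stdio.h>

    #define CPU_PARTIAL_LIMIT 4     /* stand-in for the per-cache limit */

    struct cslab {
            struct cslab *next;     /* singly linked, like slab->next    */
            int slabs;              /* chain length, counted at the head */
    };

    struct cpu_slot {
            struct cslab *partial;  /* models this CPU's partial slot */
    };

    /* Pretend drain: in SLUB the chain would go back to n->partial. */
    static void drain_to_node(struct cslab *chain)
    {
            int count = 0;

            for (; chain; chain = chain->next)
                    count++;
            printf("drained %d slabs to the node partial list\n", count);
    }

    static void put_cpu_partial(struct cpu_slot *c, struct cslab *slab)
    {
            struct cslab *old = c->partial;
            int slabs = old ? old->slabs : 0;

            if (slabs >= CPU_PARTIAL_LIMIT) {
                    /* too many cached slabs: give the old chain back */
                    drain_to_node(old);
                    old = NULL;
                    slabs = 0;
            }

            slab->next = old;
            slab->slabs = slabs + 1;
            c->partial = slab;      /* new head of this CPU's partial chain */
    }

    int main(void)
    {
            struct cpu_slot c = { .partial = NULL };
            struct cslab s[6] = {{ 0 }};

            for (int i = 0; i < 6; i++)
                    put_cpu_partial(&c, &s[i]);  /* 5th push drains the first 4 */
            return 0;
    }
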
2974 list_for_each_entry(slab, &n->partial, slab_list)
3084 * If that is not working then we fall back to the partial lists. We take the
3088 * And if we were unable to get a new slab from the partial slab lists then
3202 /* we were preempted and partial list got empty */
3556 * If the slab is empty, and node's partial list is full,
3558 * partial list.
3597 * lock and free the item. If there is no additional partial slab
3677 * per cpu partial list.
3690 * Objects left in the slab. If it was not on the partial list before
3704 * Slab on the partial list.
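
The free-path fragments around 3556-3704 encode two decisions: a slab that just became empty is discarded once the node already holds enough partial slabs, and a slab that was full gains its first free object and therefore must reappear on the partial list. A condensed model of those two branches, with hypothetical types (xnode, xslab) and none of the real code's locking or frozen-slab handling:

    #include <stdbool.h>
    #include <stdio.h>

    struct xslab {
            unsigned int inuse;     /* allocated objects */
            unsigned int objects;   /* capacity          */
            bool on_partial;
    };

    struct xnode {
            unsigned long nr_partial;
            unsigned long min_partial;  /* keep at least this many around */
    };

    static void free_to_slab(struct xnode *n, struct xslab *slab)
    {
            bool was_full = (slab->inuse == slab->objects);

            slab->inuse--;

            if (slab->inuse == 0 && n->nr_partial >= n->min_partial) {
                    /* empty, and the node has enough spares: give it back */
                    if (slab->on_partial) {
                            slab->on_partial = false;
                            n->nr_partial--;
                    }
                    printf("discard empty slab\n");
                    return;
            }

            if (was_full) {
                    /* first free object in a full slab: it becomes partial */
                    slab->on_partial = true;
                    n->nr_partial++;
                    printf("slab added to the partial list\n");
            }
    }

    int main(void)
    {
            struct xnode n = { .nr_partial = 10, .min_partial = 5 };
            struct xslab s = { .inuse = 16, .objects = 16 };

            free_to_slab(&n, &s);           /* full -> partial       */
            while (s.inuse)
                    free_to_slab(&n, &s);   /* last free discards it */
            return 0;
    }
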
4072 * must be moved on and off the partial lists and is therefore a factor in
4078 * and slab fragmentation. A higher order reduces the number of partial slabs
4099 * activity on the partial lists which requires taking the list_lock. This is
4207 INIT_LIST_HEAD(&n->partial);
4342 * per cpu partial lists of a processor.
4344 * Per cpu partial lists mainly contain slabs that just have one
4347 * per node partial lists and therefore no locking will be required.
4539 * The larger the object size is, the more slabs we want on the partial
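
The comment at 4539 says that caches with larger objects should keep more slabs on the partial list. Assuming a sizing rule of roughly half of log2(object size), clamped to fixed lower and upper bounds (the bound values below are placeholders, not quoted from the source), the calculation is simple:

    #include <stdio.h>

    #define MIN_PARTIAL 5   /* never trim the node list below this       */
    #define MAX_PARTIAL 10  /* never insist on keeping more than this    */

    static unsigned int ilog2(unsigned long x)
    {
            unsigned int r = 0;

            while (x >>= 1)
                    r++;
            return r;
    }

    static unsigned long calc_min_partial(unsigned long object_size)
    {
            unsigned long min = ilog2(object_size) / 2;

            if (min < MIN_PARTIAL)
                    min = MIN_PARTIAL;
            else if (min > MAX_PARTIAL)
                    min = MAX_PARTIAL;
            return min;
    }

    int main(void)
    {
            unsigned long sizes[] = { 32, 256, 4096, 2UL << 20 };

            for (unsigned int i = 0; i < 4; i++)
                    printf("size %lu -> min_partial %lu\n",
                           sizes[i], calc_min_partial(sizes[i]));
            return 0;
    }
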
4592 * Attempt to free all partial slabs on a node.
4594 * because a sysfs file might still access the partial list after the shutdown.
4603 list_for_each_entry_safe(slab, h, &n->partial, slab_list) {
4792 * up most to the head of the partial lists. New allocations will then
4793 * fill those up and thus they can be removed from the partial lists.
4824 list_for_each_entry_safe(slab, t, &n->partial, slab_list) {
4843 * partial list.
4846 list_splice(promote + i, &n->partial);
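
Lines 4792-4846 outline the shrink pass: walk the partial list once, discard slabs that are completely free, bucket the remaining slabs by their number of free objects, and splice the buckets back so the fullest slabs sit at the head where new allocations will top them off. The sketch below reproduces that ordering with plain arrays instead of list_splice(); SHRINK_MAX and the slab fields are invented for the example.

    #include <stdio.h>

    #define SHRINK_MAX 32   /* stand-in for the promotion bucket count */
    #define NSLABS 5

    struct sslab {
            int inuse, objects;
    };

    int main(void)
    {
            struct sslab slabs[NSLABS] = {
                    { 0, 16 }, { 15, 16 }, { 4, 16 }, { 12, 16 }, { 16, 16 },
            };
            int bucket[SHRINK_MAX + 1][NSLABS];     /* bucket[free] = slab indexes */
            int nbucket[SHRINK_MAX + 1] = { 0 };

            for (int i = 0; i < NSLABS; i++) {
                    int free = slabs[i].objects - slabs[i].inuse;

                    if (free == slabs[i].objects) {
                            printf("slab %d is empty: discard it\n", i);
                            continue;
                    }
                    if (free == 0)
                            continue;       /* full slabs are not kept on the list */
                    if (free > SHRINK_MAX)
                            free = SHRINK_MAX;
                    bucket[free][nbucket[free]++] = i;
            }

            /* Fullest slabs (fewest free objects) come out first, like the
             * final splice loop that rebuilds n->partial. */
            printf("new partial order:");
            for (int free = 1; free <= SHRINK_MAX; free++)
                    for (int j = 0; j < nbucket[free]; j++)
                            printf(" slab %d", bucket[free][j]);
            printf("\n");
            return 0;
    }
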
5010 list_for_each_entry(p, &n->partial, slab_list)
5175 list_for_each_entry(slab, &n->partial, slab_list) {
5180 pr_err("SLUB %s: %ld partial slabs counted but counter=%ld\n",
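
Lines 5175-5180 show a consistency check: count the slabs actually present on a node's partial list and complain if the cached counter disagrees. Stripped of the kernel list iterators, the same cross-check looks like this (vnode, vslab and the singly linked list are illustrative only):

    #include <stdio.h>

    struct vslab {
            struct vslab *next;
    };

    struct vnode {
            struct vslab *partial;  /* NULL-terminated for simplicity */
            long nr_partial;        /* cached counter to validate     */
    };

    static long validate_partial(const char *name, struct vnode *n)
    {
            long count = 0;

            for (struct vslab *s = n->partial; s; s = s->next)
                    count++;        /* one "slab" per list entry */

            if (count != n->nr_partial)
                    fprintf(stderr,
                            "SLUB %s: %ld partial slabs counted but counter=%ld\n",
                            name, count, n->nr_partial);
            return count;
    }

    int main(void)
    {
            struct vslab b = { NULL }, a = { &b };
            struct vnode n = { .partial = &a, .nr_partial = 3 };  /* off by one */

            validate_partial("demo-cache", &n);
            return 0;
    }
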
5629 SLAB_ATTR_RO(partial);
6425 list_for_each_entry(slab, &n->partial, slab_list)