Lines Matching defs:partial

7  * and only uses a centralized lock to manage a pool of partial slabs.
62 * on any list except the per cpu partial list. The processor that froze the
68 * The list_lock protects the partial and full list on each node and
69 * the partial slab counter. If taken, no new slabs may be added to or
70 * removed from the lists, nor may the number of partial slabs be modified.
75 * much as possible. As long as SLUB does not have to handle partial
87 * Slabs with free elements are kept on a partial list and during regular
89 * freed then the slab will show up again on the partial lists.
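
The header comments above describe the core discipline: each node keeps a partial list whose membership and slab counter only change under list_lock. A minimal sketch of that pattern, assuming the standard kernel list/spinlock primitives and a simplified stand-in for the node structure (the *_sketch names are hypothetical; only the field names mirror the listing):

#include <linux/list.h>
#include <linux/mm_types.h>
#include <linux/spinlock.h>

struct kmem_cache_node_sketch {
        spinlock_t list_lock;           /* guards ->partial and ->nr_partial */
        unsigned long nr_partial;       /* number of slabs on ->partial */
        struct list_head partial;       /* slabs that still have free objects */
};

/* Put a slab with free objects back on the node's partial list. */
static void add_partial_sketch(struct kmem_cache_node_sketch *n,
                               struct page *page)
{
        unsigned long flags;

        spin_lock_irqsave(&n->list_lock, flags);
        n->nr_partial++;
        list_add(&page->slab_list, &n->partial);
        spin_unlock_irqrestore(&n->list_lock, flags);
}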
163 * Minimum number of partial slabs. These will be left on the partial
169 * Maximum number of desirable partial slabs.
170 * The existence of more partial slabs makes kmem_cache_shrink
171 * sort the partial list by the number of objects in use.
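
For reference, the MIN_PARTIAL/MAX_PARTIAL bounds those comments refer to look roughly like this; the values 5 and 10 are the ones slub.c has carried for a long time, but treat them here as illustrative:

/* Minimum number of partial slabs: kept on the list even when empty. */
#define MIN_PARTIAL 5

/*
 * Maximum number of desirable partial slabs: beyond this,
 * kmem_cache_shrink() sorts the list by the number of objects in use.
 */
#define MAX_PARTIAL 10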
1884 list_add_tail(&page->slab_list, &n->partial);
1886 list_add(&page->slab_list, &n->partial);
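
Lines 1884/1886 are the two insertion points into n->partial: slabs expected to empty out soon go to the tail, slabs that should be refilled first go to the head. A hedged sketch of that helper, reusing the node sketch above (the DEACTIVATE_TO_*_SKETCH values are simplified stand-ins for the tags slub.c uses):

#define DEACTIVATE_TO_HEAD_SKETCH 0
#define DEACTIVATE_TO_TAIL_SKETCH 1

/* Caller must hold n->list_lock. */
static void __add_partial_sketch(struct kmem_cache_node_sketch *n,
                                 struct page *page, int tail)
{
        n->nr_partial++;
        if (tail == DEACTIVATE_TO_TAIL_SKETCH)
                list_add_tail(&page->slab_list, &n->partial);
        else
                list_add(&page->slab_list, &n->partial);
}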
1905 * Remove slab from the partial list, freeze it and
1954 * Try to allocate a partial slab from a specific node.
1965 * Racy check. If we mistakenly see no partial slabs then we
1967 * partial slab and there is none available then get_partial()
1974 list_for_each_entry_safe(page, page2, &n->partial, slab_list) {
2020 * instead of attempting to obtain partial slabs from other nodes.
2024 * may return off node objects because partial slabs are obtained
2030 * This means scanning over all nodes to look for partial slabs which
2067 * Get a partial page, lock it and return it.
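
Lines 1954-2067 revolve around get_partial_node(): an unlocked, intentionally racy peek at the node's partial count to avoid taking list_lock when there is nothing to take, followed by a locked walk that tries to grab a slab. The shape of that is sketched below; acquire_slab_sketch() is a hypothetical stand-in for the real freeze-and-detach step:

/* Hypothetical: detach one slab from ->partial; the real code also freezes it. */
static struct page *acquire_slab_sketch(struct kmem_cache_node_sketch *n,
                                        struct page *page)
{
        list_del(&page->slab_list);
        n->nr_partial--;
        return page;
}

static struct page *get_partial_node_sketch(struct kmem_cache_node_sketch *n)
{
        struct page *page, *page2, *got = NULL;
        unsigned long flags;

        /*
         * Racy check: seeing zero by mistake only costs a trip to the slower
         * path; seeing non-zero by mistake just means the locked walk below
         * finds nothing.
         */
        if (!n || !n->nr_partial)
                return NULL;

        spin_lock_irqsave(&n->list_lock, flags);
        list_for_each_entry_safe(page, page2, &n->partial, slab_list) {
                got = acquire_slab_sketch(n, page);
                if (got)
                        break;
        }
        spin_unlock_irqrestore(&n->list_lock, flags);
        return got;
}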
2307 * Unfreeze all the cpu partial slabs.
2376 * partial page slot if available.
2379 * per node partial list.
2392 oldpage = this_cpu_read(s->cpu_slab->partial);
2400 * partial array is full. Move the existing
2401 * set to the per node partial list.
2420 } while (this_cpu_cmpxchg(s->cpu_slab->partial, oldpage, page)
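
Lines 2307-2420 belong to put_cpu_partial(): a just-deactivated slab is chained onto the per-CPU partial list with a lockless this_cpu_cmpxchg() retry loop, and only when that set overflows is everything unfrozen back to the per-node list. The sketch below shows the retry loop only and leaves the overflow path as a comment; the *_sketch structures are simplified, and page->next is the slab-private link SLUB keeps in struct page:

#include <linux/percpu.h>

struct kmem_cache_cpu_sketch {
        struct page *partial;           /* chain of frozen partial slabs */
};

struct kmem_cache_sketch {
        struct kmem_cache_cpu_sketch __percpu *cpu_slab;
};

static void put_cpu_partial_sketch(struct kmem_cache_sketch *s,
                                   struct page *page)
{
        struct page *oldpage;

        do {
                oldpage = this_cpu_read(s->cpu_slab->partial);
                /*
                 * Here the real code checks whether the existing chain already
                 * holds too many objects and, if so, unfreezes the whole set
                 * to the per-node partial list before starting a new chain.
                 */
                page->next = oldpage;
        } while (this_cpu_cmpxchg(s->cpu_slab->partial, oldpage, page)
                 != oldpage);
}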
2527 list_for_each_entry(page, &n->partial, slab_list)
2656 * If that is not working then we fall back to the partial lists. We take the
2660 * And if we were unable to get a new slab from the partial slab lists then
2966 * lock and free the item. If there is no additional partial page
3042 * per cpu partial list.
3055 * Objects left in the slab. If it was not on the partial list before
3069 * Slab on the partial list.
3366 * must be moved on and off the partial lists and is therefore a factor in
3372 * and slab fragmentation. A higher order reduces the number of partial slabs
3392 * activity on the partial lists which requires taking the list_lock. This is
3486 INIT_LIST_HEAD(&n->partial);
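
Line 3486 is part of per-node setup; the whole initialization is essentially three statements (sketch, again using the simplified node structure):

static void init_kmem_cache_node_sketch(struct kmem_cache_node_sketch *n)
{
        n->nr_partial = 0;
        spin_lock_init(&n->list_lock);
        INIT_LIST_HEAD(&n->partial);
}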
3619 * per cpu partial lists of a processor.
3621 * Per cpu partial lists mainly contain slabs that just have one
3624 * per node partial lists and therefore no locking will be required.
3628 * A) The number of objects from per cpu partial slabs dumped to the
3630 * B) The number of objects in cpu partial slabs to extract from the
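
Lines 3619-3630 document the cpu_partial tunable, which caps both how many objects may accumulate in a CPU's partial slabs before they are dumped to the node list (A) and how many are pulled back per refill (B). Its default scales inversely with object size; the cutoffs below are illustrative approximations, not the authoritative values:

/* Illustrative defaults: larger objects get smaller per-CPU partial caches. */
static unsigned int cpu_partial_default_sketch(unsigned int object_size)
{
        if (object_size >= 4096)
                return 2;
        if (object_size >= 1024)
                return 6;
        if (object_size >= 256)
                return 13;
        return 30;
}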
3817 * The larger the object size is, the more pages we want on the partial
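
Line 3817 motivates scaling the per-node minimum with object size: big objects pack few per slab, so more partial slabs are worth keeping around. The derivation is a clamped log2 (sketch; ilog2() is the kernel helper, MIN_PARTIAL/MAX_PARTIAL as sketched earlier, and the result is what slub.c keeps as the cache's minimum partial count):

#include <linux/log2.h>

static unsigned long min_partial_for_size_sketch(unsigned long object_size)
{
        unsigned long min = ilog2(object_size) / 2;

        if (min < MIN_PARTIAL)
                min = MIN_PARTIAL;
        else if (min > MAX_PARTIAL)
                min = MAX_PARTIAL;
        return min;
}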
3870 * Attempt to free all partial slabs on a node.
3872 * because sysfs files might still access the partial list after shutdown.
3881 list_for_each_entry_safe(page, h, &n->partial, slab_list) {
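
Lines 3870-3881 are the shutdown path: the partial list is walked with the _safe iterator because entries are removed during the walk, and list_lock is held because sysfs readers may still be looking at the list. A condensed sketch; discard_slab_sketch() is a hypothetical stand-in for handing pages back, and unlike this sketch the real code defers the actual freeing until after the lock is dropped:

static void discard_slab_sketch(struct page *page)
{
        /* The real code returns the underlying pages to the page allocator. */
}

static void free_partial_sketch(struct kmem_cache_node_sketch *n)
{
        struct page *page, *h;
        unsigned long flags;

        spin_lock_irqsave(&n->list_lock, flags);
        list_for_each_entry_safe(page, h, &n->partial, slab_list) {
                if (page->inuse)
                        continue;       /* objects still allocated: real code reports a leak */
                list_del(&page->slab_list);
                n->nr_partial--;
                discard_slab_sketch(page);
        }
        spin_unlock_irqrestore(&n->list_lock, flags);
}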
4136 * up most to the head of the partial lists. New allocations will then
4137 * fill those up and thus they can be removed from the partial lists.
4169 list_for_each_entry_safe(page, t, &n->partial, slab_list) {
4187 * partial list.
4190 list_splice(promote + i, &n->partial);
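
Lines 4136-4190 are the sorting pass of kmem_cache_shrink(): under list_lock, partial slabs are bucketed by how many free objects they have, completely empty ones are set aside for discarding, and the buckets are spliced back so the fullest slabs end up at the head where new allocations will consume them first. A sketch of that pass; SHRINK_PROMOTE_MAX and the page fields mirror the listing, the rest is simplified (this sketch leaves empty slabs in place instead of discarding them):

#define SHRINK_PROMOTE_MAX 32

static void shrink_partial_list_sketch(struct kmem_cache_node_sketch *n)
{
        struct list_head promote[SHRINK_PROMOTE_MAX];
        struct page *page, *t;
        unsigned long flags;
        int i;

        for (i = 0; i < SHRINK_PROMOTE_MAX; i++)
                INIT_LIST_HEAD(promote + i);

        spin_lock_irqsave(&n->list_lock, flags);
        list_for_each_entry_safe(page, t, &n->partial, slab_list) {
                int free = page->objects - page->inuse;

                if (free == page->objects)
                        continue;       /* empty: real code discards it */
                if (free >= 1 && free <= SHRINK_PROMOTE_MAX)
                        list_move(&page->slab_list, promote + free - 1);
        }

        /*
         * Splice the emptiest buckets first; the last splice (slabs with the
         * fewest free objects, i.e. the fullest) lands at the list head.
         */
        for (i = SHRINK_PROMOTE_MAX - 1; i >= 0; i--)
                list_splice(promote + i, &n->partial);
        spin_unlock_irqrestore(&n->list_lock, flags);
}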
4351 list_for_each_entry(p, &n->partial, slab_list)
4556 list_for_each_entry(page, &n->partial, slab_list) {
4561 pr_err("SLUB %s: %ld partial slabs counted but counter=%ld\n",
4756 list_for_each_entry(page, &n->partial, slab_list)
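
Lines 4351-4756 all walk n->partial to count slabs or objects, for example to cross-check the nr_partial counter during validation; the pattern is just a locked list walk (sketch):

#include <linux/printk.h>

static long count_partial_sketch(struct kmem_cache_node_sketch *n)
{
        struct page *page;
        unsigned long flags;
        long count = 0;

        spin_lock_irqsave(&n->list_lock, flags);
        list_for_each_entry(page, &n->partial, slab_list)
                count++;
        if (count != (long)n->nr_partial)
                pr_err("SLUB sketch: %ld partial slabs counted but counter=%ld\n",
                       count, (long)n->nr_partial);
        spin_unlock_irqrestore(&n->list_lock, flags);
        return count;
}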
5119 SLAB_ATTR_RO(partial);