Searched refs:slabs (Results 1 - 9 of 9) sorted by relevance
/kernel/linux/linux-5.10/tools/vm/

  slabinfo.c
      3: * Slabinfo: Tool to get reports about slabs
     36: unsigned long partial, objects, slabs, objects_partial, objects_total;  [member]
     57: int slabs;  [variable]
    116: "-A|--activity Most active slabs first\n"  [in usage()]
    119: "-e|--empty Show empty slabs\n"  [in usage()]
    123: "-l|--slabs Show slabs\n"  [in usage()]
    126: "-N|--lines=K Show the first K slabs\n"  [in usage()]
    128: "-P|--partial Sort by number of partial slabs\n"  [in usage()]
    129: "-r|--report Detailed report on single slabs\  [in usage()]
    [all...]

  slabinfo-gnuplot.sh
     14: # and generate graphs (totals, slabs sorted by size, slabs sorted
     36: echo "-l - plot slabs stats for FILE(s)"
    151: out=`basename "$in"`"-slabs-by-loss"
    160: out=`basename "$in"`"-slabs-by-size"
    187: mode=slabs
    232: slabs)
    258: slabs)
/kernel/linux/linux-6.6/tools/mm/

  slabinfo.c
      3: * Slabinfo: Tool to get reports about slabs
     36: unsigned long partial, objects, slabs, objects_partial, objects_total;  [member]
     57: int slabs;  [variable]
    116: "-A|--activity Most active slabs first\n"  [in usage()]
    119: "-e|--empty Show empty slabs\n"  [in usage()]
    123: "-l|--slabs Show slabs\n"  [in usage()]
    126: "-N|--lines=K Show the first K slabs\n"  [in usage()]
    128: "-P|--partial Sort by number of partial slabs\n"  [in usage()]
    129: "-r|--report Detailed report on single slabs\  [in usage()]
    [all...]

  slabinfo-gnuplot.sh
     14: # and generate graphs (totals, slabs sorted by size, slabs sorted
     36: echo "-l - plot slabs stats for FILE(s)"
    151: out=`basename "$in"`"-slabs-by-loss"
    160: out=`basename "$in"`"-slabs-by-size"
    187: mode=slabs
    232: slabs)
    258: slabs)
/kernel/linux/linux-5.10/samples/bpf/

  xsk_fwd.c
     90: * slabs of *n_buffers_per_slab*. Initially, there are *n_slabs* slabs in the
     91: * pool that are completely filled with buffer pointers (full slabs).
     94: * free, with both of these slabs initially empty. When the cache's allocation
     95: * slab goes empty, it is swapped with one of the available full slabs from the
     97: * swapped for one of the empty slabs from the pool, which is guaranteed to
    100: * Partially filled slabs never get traded between the cache and the pool
    109: u64 **slabs;  [member]
    170: bp->slabs = (u64 **)&p[sizeof(struct bpool)];  [in bpool_init()]
    183: bp->slabs[  [in bpool_init()]
    [all...]
/kernel/linux/linux-5.10/drivers/xen/

  swiotlb-xen.c
    133: int slabs = min(nslabs - i, (unsigned long)IO_TLB_SEGSIZE);  [in xen_swiotlb_fixup(), local]
    138: get_order(slabs << IO_TLB_SHIFT),  [in xen_swiotlb_fixup()]
    144: i += slabs;  [in xen_swiotlb_fixup()]
/kernel/linux/linux-6.6/mm/

  slub.c
      7: * and only uses a centralized lock to manage a pool of partial slabs.
     60: * The role of the slab_mutex is to protect the list of all the slabs
     77: * Frozen slabs
     89: * the partial slab counter. If taken then no new slabs may be added or
     90: * removed from the lists nor make the number of partial slabs be modified.
     91: * (Note that the total number of slabs is an atomic value that may be
     96: * slabs, operations can continue without any centralized lock. F.e.
     97: * allocating a long series of objects that fill up slabs does not require
    136: * Allocations only occur from these slabs called cpu slabs
   1513: parse_slub_debug_flags(char *str, slab_flags_t *flags, char **slabs, bool init)  [parse_slub_debug_flags() argument]
   2701: int slabs = 0;  [put_cpu_partial() local]
   5646: int slabs = 0;  [slabs_cpu_partial_show() local]
   5723: SLAB_ATTR_RO(slabs);  [global variable]
    [all...]

  slab.h
     68: int slabs;  /* Nr of slabs left */  [member]
/kernel/linux/linux-5.10/mm/

  slub.c
      7: * and only uses a centralized lock to manage a pool of partial slabs.
     51: * The role of the slab_mutex is to protect the list of all the slabs
     69: * the partial slab counter. If taken then no new slabs may be added or
     70: * removed from the lists nor make the number of partial slabs be modified.
     71: * (Note that the total number of slabs is an atomic value that may be
     76: * slabs, operations can continue without any centralized lock. F.e.
     77: * allocating a long series of objects that fill up slabs does not require
     82: * while handling per_cpu slabs, due to kernel preemption.
     85: * Allocations only occur from these slabs called cpu slabs
   1269: parse_slub_debug_flags(char *str, slab_flags_t *flags, char **slabs, bool init)  [parse_slub_debug_flags() argument]
   5211: SLAB_ATTR_RO(slabs);  [global variable]
    [all...]
Completed in 15 milliseconds