Lines Matching defs:cache

23 * The memory is organized in caches, one cache for each object type.
25 * Each cache consists of many slabs (they are small (usually one
33 * Each cache can only support one memory type (GFP_DMA, GFP_HIGHMEM,
35 * cache for that memory type.
45 * kmem_cache_destroy() CAN CRASH if you try to allocate from the cache
48 * Each cache has a short per-cpu head array, most allocs
50 * of the entries in the array are given back into the global cache.
51 * The head array is strictly LIFO and should improve the cache hit rates.
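
The per-cpu head array described in the lines above is, at its core, a small LIFO stack of object pointers. A minimal userspace sketch of that idea follows; the struct and function names are illustrative, not the kernel's.

    #include <stddef.h>

    /* Toy stand-in for the per-cpu head array: a bounded LIFO stack of
     * object pointers, one instance per CPU in the real allocator. */
    struct head_array {
        unsigned int avail;     /* cached object pointers currently held */
        unsigned int limit;     /* capacity of entry[] */
        void *entry[16];
    };

    /* Allocation pops the most recently freed object, so it is likely
     * still warm in the data cache. */
    static void *head_array_alloc(struct head_array *ha)
    {
        return ha->avail ? ha->entry[--ha->avail] : NULL;
    }

    /* Free pushes onto the stack; on overflow the real code hands half of
     * the entries back to the global (per-node) cache before retrying. */
    static int head_array_free(struct head_array *ha, void *obj)
    {
        if (ha->avail == ha->limit)
            return -1;          /* caller must flush to the global cache */
        ha->entry[ha->avail++] = obj;
        return 0;
    }
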
63 * The non-constant members are protected with a per-cache irq spinlock.
72 * The global cache-chain is protected by the mutex 'slab_mutex'.
73 * The mutex is only needed when accessing/extending the cache-chain, which
77 * At present, each engine can be growing a cache. This should be blocked.
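
For illustration, this is roughly what "accessing the cache-chain under slab_mutex" looks like; slab_mutex and the slab_caches list are real symbols, but they are internal to mm/ (declared in mm/slab.h), so a walk like this can only live inside the slab code itself.

    #include <linux/list.h>
    #include <linux/mutex.h>
    #include <linux/printk.h>
    #include <linux/slab.h>
    #include "slab.h"           /* mm/slab.h: slab_mutex, slab_caches */

    /* Hedged sketch: print every cache on the global chain while holding
     * the chain mutex, mirroring the locking rule stated above. */
    static void walk_cache_chain(void)
    {
        struct kmem_cache *cachep;

        mutex_lock(&slab_mutex);
        list_for_each_entry(cachep, &slab_caches, list)
            pr_info("cache %s\n", cachep->name);
        mutex_unlock(&slab_mutex);
    }
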
94 #include <linux/cache.h>
177 * - LIFO ordering, to hand out cache-warm objects from _alloc
181 * The limit is stored in the per-cpu structure to reduce the data cache
210 static int drain_freelist(struct kmem_cache *cache,
374 static inline void *index_to_obj(struct kmem_cache *cache,
377 return slab->s_mem + cache->size * idx;
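
The arithmetic in index_to_obj() above also has a trivial inverse (the kernel's obj_to_index() uses a precomputed reciprocal instead of a division). Below is a self-contained toy version with stand-in structs, since the real struct kmem_cache and struct slab are internal to mm/.

    #include <stddef.h>

    struct toy_cache { unsigned int size; };  /* object stride in bytes */
    struct toy_slab  { char *s_mem; };        /* address of first object */

    /* Object idx starts at s_mem + idx * size ... */
    static inline void *toy_index_to_obj(const struct toy_cache *cache,
                                         const struct toy_slab *slab,
                                         unsigned int idx)
    {
        return slab->s_mem + (size_t)cache->size * idx;
    }

    /* ... and the index of an object is its offset divided by the stride. */
    static inline unsigned int toy_obj_to_index(const struct toy_cache *cache,
                                                const struct toy_slab *slab,
                                                const void *obj)
    {
        return ((const char *)obj - slab->s_mem) / cache->size;
    }
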
381 /* internal cache of cache description objs */
441 pr_err("slab error in %s(): cache `%s': %s\n",
507 * the CPUs getting into lockstep and contending for the global cache chain
542 * cache the pointers are not cleared and they could be counted as
784 * cache on this cpu.
844 * Allocates and initializes a kmem_cache_node for a node on each slab cache, used for
990 * the respective cache's slabs, now we can go ahead and
1048 * kmem_cache_node of any cache. This is to avoid a race between cpu_down, and
1071 * Shutdown cache reaper. Note that the slab_mutex is held so
1084 * Drains freelist for a node on each slab cache, used for memory hot-remove.
1168 * For setting up all the kmem_cache_node for a cache whose buffer_size is the same as
1209 * 1) initialize the kmem_cache cache: it contains the struct
1215 * 2) Create the first kmalloc cache.
1216 * The struct kmem_cache for the new cache is allocated normally.
1221 * kmalloc cache with kmalloc allocated arrays.
1223 * the other caches with kmalloc allocated memory.
1320 pr_warn(" cache: %s, object size: %d, order: %d\n",
1592 * @cachep: cache pointer being destroyed
1596 * Before calling, the slab must have been unlinked from the cache. The
1634 * @cachep: pointer to the cache that is being created
1635 * @size: size of objects to be created in this cache.
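
Those @cachep/@size parameters belong to the helper that picks a page order for the slab and, with it, the number of objects per slab. Ignoring the freelist/management overhead and alignment that the real code accounts for, the core arithmetic is just the following; this is a purely illustrative helper, not calculate_slab_order().

    #include <stddef.h>

    /* A slab of 2^gfporder pages holds roughly slab_bytes / obj_size
     * objects; the real calculation also subtracts management overhead. */
    static unsigned int objs_per_slab_estimate(size_t page_size,
                                               unsigned int gfporder,
                                               size_t obj_size)
    {
        size_t slab_bytes = page_size << gfporder;

        return slab_bytes / obj_size;
    }

    /* Example: 4096-byte pages, order 0, 256-byte objects -> 16 objects. */
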
1751 /* Creation of first cache (kmem_cache). */
1887 * __kmem_cache_create - Create a cache.
1888 * @cachep: cache management descriptor
1901 * %SLAB_HWCACHE_ALIGN - Align the objects in this cache to a hardware
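
Typical use of that flag through the public API; the cache name, struct foo and the error handling are placeholders.

    #include <linux/errno.h>
    #include <linux/slab.h>

    struct foo {
        int a;
        long b;
    };

    static struct kmem_cache *foo_cachep;

    /* Create a cache whose objects are aligned to a hardware cacheline. */
    static int foo_cache_init(void)
    {
        foo_cachep = kmem_cache_create("foo", sizeof(struct foo), 0,
                                       SLAB_HWCACHE_ALIGN, NULL);
        return foo_cachep ? 0 : -ENOMEM;
    }
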
2170 static int drain_freelist(struct kmem_cache *cache,
2193 * to the cache.
2195 n->free_objects -= cache->num;
2197 slab_destroy(cache, slab);
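
Read together, those fragments give the shape of drain_freelist(): detach fully free slabs from the node, adjust the node's free-object count by the per-slab object count, and destroy each slab. The sketch below leaves out the n->list_lock handling and slab accounting of the real function; slab_destroy() and these structures are internal to mm/slab.c and mm/slab.h.

    #include <linux/list.h>
    #include "slab.h"   /* mm/slab.h: struct kmem_cache_node, struct slab */

    /* Simplified drain loop; not the kernel's code, locking elided. */
    static int drain_freelist_sketch(struct kmem_cache *cache,
                                     struct kmem_cache_node *n, int tofree)
    {
        int nr_freed = 0;

        while (nr_freed < tofree && !list_empty(&n->slabs_free)) {
            struct slab *slab = list_first_entry(&n->slabs_free,
                                                 struct slab, slab_list);

            list_del(&slab->slab_list);
            n->free_objects -= cache->num;  /* whole slab was free */
            slab_destroy(cache, slab);      /* return pages to the buddy */
            nr_freed++;
        }
        return nr_freed;
    }
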
2262 * For a slab cache whose slab descriptor is off-slab, the
2263 * slab descriptor can't come from the same cache that is being created,
2326 * cache which they are a constructor for. Otherwise, deadlock.
2505 pr_err("slab: double free detected in cache '%s', objp %px\n",
2519 * Grow (by 1) the number of slabs within a cache. This is called by
2520 * kmem_cache_alloc() when there are no active objs left in a cache.
2643 static inline void verify_redzone_free(struct kmem_cache *cache, void *obj)
2647 redzone1 = *dbg_redzone1(cache, obj);
2648 redzone2 = *dbg_redzone2(cache, obj);
2657 slab_error(cache, "double free detected");
2659 slab_error(cache, "memory outside object was overwritten");
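
The logic behind those lines: on free, both red zones should still carry the "active" poison pattern; if both already carry the "inactive" pattern the object was freed twice, and any other value means memory around the object was overwritten. Below is a condensed version operating on the two zone words directly; RED_ACTIVE/RED_INACTIVE are the real constants from <linux/poison.h>, but the helper name is made up.

    #include <linux/poison.h>   /* RED_ACTIVE, RED_INACTIVE */

    /* Returns NULL if the red zones look sane, else an error string. */
    static const char *check_redzones(unsigned long long redzone1,
                                      unsigned long long redzone2)
    {
        if (redzone1 == RED_ACTIVE && redzone2 == RED_ACTIVE)
            return NULL;                     /* both intact: OK */
        if (redzone1 == RED_INACTIVE && redzone2 == RED_INACTIVE)
            return "double free detected";   /* already marked free */
        return "memory outside object was overwritten";
    }
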
2873 * If there was little recent activity on this cache, then
3050 static void *fallback_alloc(struct kmem_cache *cache, gfp_t flags)
3077 get_node(cache, nid) &&
3078 get_node(cache, nid)->free_objects) {
3079 obj = ____cache_alloc_node(cache,
3093 slab = cache_grow_begin(cache, flags, numa_mem_id());
3094 cache_grow_end(cache, slab);
3097 obj = ____cache_alloc_node(cache,
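
Condensed shape of the fallback path those lines come from: scan the allowed nodes for one that already has free objects, and only if none does, grow the cache by one slab and retry on the node that received it. The zonelist walk and GFP constraints of the real fallback_alloc() are reduced to a plain node loop here, and all the helpers named are internal to mm/slab.c and mm/slab.h.

    #include <linux/gfp.h>
    #include <linux/nodemask.h>
    #include <linux/topology.h>
    #include "slab.h"   /* mm/slab.h: get_node(), slab_nid() */

    /* Illustrative sketch, not the kernel's fallback_alloc(). */
    static void *fallback_alloc_sketch(struct kmem_cache *cache, gfp_t flags)
    {
        struct slab *slab;
        int nid;

        for_each_online_node(nid) {
            struct kmem_cache_node *n = get_node(cache, nid);
            void *obj;

            if (n && n->free_objects) {
                obj = ____cache_alloc_node(cache, flags, nid);
                if (obj)
                    return obj;
            }
        }

        /* Nothing free anywhere: grow by one slab, then retry on the
         * node that the new slab landed on. */
        slab = cache_grow_begin(cache, flags, numa_mem_id());
        cache_grow_end(cache, slab);
        if (!slab)
            return NULL;
        return ____cache_alloc_node(cache, flags, slab_nid(slab));
    }
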
3345 * Release an obj back to its cache. If the obj has a constructed state, it must
3392 * This will avoid cache misses that happen while accessing slabp (which
3395 * the cache.
3496 * @cachep: The cache to allocate from.
3569 * @cachep: The cache the allocation was from.
3573 * cache.
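
The usual pairing of the two calls documented above, reusing the hypothetical foo_cachep from the earlier creation sketch:

    #include <linux/slab.h>

    /* Allocate one object from the cache ... */
    static struct foo *foo_get(void)
    {
        return kmem_cache_alloc(foo_cachep, GFP_KERNEL);
    }

    /* ... and release it back to the same cache it came from. */
    static void foo_put(struct foo *f)
    {
        kmem_cache_free(foo_cachep, f);
    }
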
3722 * - create a LIFO ordering, i.e. return objects that are cache-warm
3807 * If we cannot acquire the cache chain mutex then just give up - we'll try
3967 /* Find the cache in the chain of caches. */
3993 * cache's usercopy region.
3995 * Returns NULL if check passes, otherwise const char * to name of cache
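
The check those lines document boils down to a window test: the copy is allowed only if [offset, offset + n) lies entirely inside the region declared with useroffset/usersize at cache creation; otherwise the cache's name is reported. A hedged sketch follows; useroffset/usersize are real struct kmem_cache fields, the helper itself is illustrative.

    #include <linux/slab.h>

    /* Returns NULL if the requested copy fits in the usercopy region,
     * otherwise the cache name for the caller to report. */
    static const char *check_usercopy_region(const struct kmem_cache *cachep,
                                             unsigned long offset,
                                             unsigned long n)
    {
        if (offset >= cachep->useroffset &&
            offset - cachep->useroffset <= cachep->usersize &&
            n <= cachep->useroffset - offset + cachep->usersize)
            return NULL;        /* entirely inside the whitelisted window */
        return cachep->name;
    }
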