Lines Matching defs:freed

441 * freed outside of vmscan:
444 * (3) XFS freed buffer pages.
448 * charged to the target memcg, causing an entire page to be freed.
456 * For uncommon cases where the freed pages were actually mostly
458 * amount. This should be fine. The freed pages will be uncharged
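
The comment excerpted above explains why pages freed outside of vmscan proper (slab objects, XFS buffer pages, ...) are credited to the reclaim totals only for global reclaim, not for memcg-targeted reclaim, since they cannot reliably be attributed to the target memcg. A minimal sketch of folding such a counter into the scan totals follows; the helper and field names (flush_reclaim_state, reclaimed, cgroup_reclaim) are illustrative here, not a verbatim copy of the kernel function.

	static void flush_reclaim_state(struct scan_control *sc)
	{
		struct reclaim_state *rs = current->reclaim_state;

		/*
		 * Pages freed outside of vmscan cannot reliably be attributed
		 * to the target memcg, so fold them into the totals only for
		 * global (non-cgroup) reclaim; under memcg reclaim they are
		 * still uncharged, just not counted here.
		 */
		if (rs && !cgroup_reclaim(sc)) {
			sc->nr_reclaimed += rs->reclaimed;
			rs->reclaimed = 0;
		}
	}
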
765 unsigned long freed = 0;
831 freed += ret;
855 trace_mm_shrink_slab_end(shrinker, shrinkctl->nid, freed, nr, new_nr, total_scan);
856 return freed;
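
The `freed` total returned here is accumulated from the per-batch return values of a shrinker's ->scan_objects callback (the `freed += ret;` hit above) before being reported through the mm_shrink_slab_end tracepoint. A minimal sketch of such a callback pair over a hypothetical object cache follows; the demo_* names and the backing list are made up, and the registration call (register_shrinker() on older kernels, shrinker_alloc()/shrinker_register() on newer ones) is omitted.

	#include <linux/shrinker.h>
	#include <linux/spinlock.h>
	#include <linux/list.h>
	#include <linux/slab.h>

	struct demo_obj {
		struct list_head node;
		/* payload ... */
	};

	static LIST_HEAD(demo_cache);
	static DEFINE_SPINLOCK(demo_lock);
	static unsigned long demo_nr_objects;

	static unsigned long demo_count(struct shrinker *shrink,
					struct shrink_control *sc)
	{
		/* 0 means "nothing to reclaim right now"; vmscan skips the scan. */
		return READ_ONCE(demo_nr_objects);
	}

	static unsigned long demo_scan(struct shrinker *shrink,
				       struct shrink_control *sc)
	{
		unsigned long freed = 0;
		struct demo_obj *obj;

		spin_lock(&demo_lock);
		while (freed < sc->nr_to_scan &&
		       (obj = list_first_entry_or_null(&demo_cache,
						       struct demo_obj, node))) {
			list_del(&obj->node);
			demo_nr_objects--;
			kfree(obj);
			freed++;	/* each reclaimed object counts once */
		}
		spin_unlock(&demo_lock);

		/* Returning SHRINK_STOP instead would tell vmscan to back off. */
		return freed;
	}

	static struct shrinker demo_shrinker = {
		.count_objects = demo_count,
		.scan_objects  = demo_scan,
		.seeks         = DEFAULT_SEEKS,
	};
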
864 unsigned long ret, freed = 0;
922 freed += ret;
925 freed = freed ? : 1;
931 return freed;
965 unsigned long ret, freed = 0;
991 freed += ret;
998 freed = freed ? : 1;
1006 return freed;
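
Both memcg-aware and global shrink_slab paths in the excerpt use the GNU `x ? : y` shorthand (`freed = freed ? : 1;`) on their early bail-out paths, apparently so that a pass cut short (for example by lock contention) still reports at least one freed object and callers do not mistake it for "nothing reclaimable". The standalone snippet below demonstrates only that shorthand and the at-least-one convention; report_progress() and bailed_early are illustrative names, not kernel code.

	/* Build and run with: gcc -o elvis elvis.c && ./elvis */
	#include <stdio.h>

	/*
	 * GNU extension: "x ? : y" evaluates x once and yields x when it is
	 * nonzero, otherwise y (equivalent to "x ? x : y" without evaluating
	 * x twice).
	 */
	static unsigned long report_progress(unsigned long freed, int bailed_early)
	{
		if (bailed_early)
			freed = freed ? : 1;	/* never report zero progress on an early bail-out */
		return freed;
	}

	int main(void)
	{
		printf("%lu\n", report_progress(0, 1));		/* prints 1  */
		printf("%lu\n", report_progress(42, 1));	/* prints 42 */
		printf("%lu\n", report_progress(0, 0));		/* prints 0  */
		return 0;
	}
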
1011 unsigned long freed = 0;
1016 freed += shrink_slab(GFP_KERNEL, nid, memcg, 0);
1019 return freed;
1026 unsigned long freed;
1029 freed = 0;
1034 freed += drop_slab_node(nid);
1036 } while ((freed >> shift++) > 1);
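
The do/while loop closing here sums per-node shrink passes and keeps issuing new full passes while each pass still frees a meaningful amount; the bar a pass must clear doubles on every iteration (`(freed >> shift++) > 1`, i.e. continue only while a pass freed at least 2 << shift), so the loop terminates even if shrinkers keep trickling back small amounts. The standalone model below reproduces just that termination rule; fake_pass() is a made-up stand-in for a real shrink pass.

	/* Build and run with: gcc -o drop_loop drop_loop.c && ./drop_loop */
	#include <stdio.h>

	/* Made-up stand-in for a full shrink pass that frees fewer pages each time. */
	static unsigned long fake_pass(int iteration)
	{
		static const unsigned long yields[] = { 900, 400, 150, 40, 10, 3, 1 };

		if (iteration < (int)(sizeof(yields) / sizeof(yields[0])))
			return yields[iteration];
		return 0;
	}

	int main(void)
	{
		unsigned long freed;
		int shift = 0, pass = 0;

		do {
			freed = fake_pass(pass++);
			printf("pass %d freed %lu (continue threshold %lu)\n",
			       pass, freed, 2UL << shift);
		/* Continue while this pass freed at least 2 << shift; the bar doubles each pass. */
		} while ((freed >> shift++) > 1);

		return 0;
	}
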
1074 * prevents it from being freed up. But we have a ref on the folio and once
1824 * Lazyfree folio could be freed directly
1839 * tail pages can be freed without IO.
1993 * and mark the folio clean - it can be freed.
2000 * (refcount == 1) it can be freed. Otherwise, leave
2033 * folio will be freed anyway. It doesn't matter
2289 * sure the folio is not being freed elsewhere -- the
2423 * On return, @list is reused as a list of folios to be freed by the caller.
3307 * freed after the RCU grace period and reallocated if needed again.
6650 * are potentially other callers using the pages just freed. So proceed
7674 * We have freed the memory, now we should compact it to make
7880 * freed pages.
8080 * priorities until we have enough memory freed.