Lines Matching refs:reclaiming

450 * overestimating the reclaimed amount (potentially under-reclaiming).
452 * Only count such pages for global reclaim to prevent under-reclaiming
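The fragments at 450-452 describe why pages freed outside of vmscan (slab and similar) are credited only to global reclaim: a memcg cannot be sure those pages were charged to it, so crediting them would overestimate its reclaimed amount. A minimal userspace sketch of that accounting guard; the struct layouts and the global_reclaim field are stand-ins, not the kernel's exact definitions:

#include <stdbool.h>
#include <stdio.h>

/* Stand-ins for the kernel's types; field names are assumptions. */
struct reclaim_state { unsigned long reclaimed; };
struct scan_control  { unsigned long nr_reclaimed; bool global_reclaim; };

/*
 * Fold pages freed outside of vmscan into the scan totals only for
 * global reclaim: crediting them to a memcg would overestimate what
 * it reclaimed and lead to under-reclaiming the target.
 */
static void flush_reclaim_state(struct scan_control *sc, struct reclaim_state *rs)
{
	if (rs && sc->global_reclaim) {
		sc->nr_reclaimed += rs->reclaimed;
		rs->reclaimed = 0;
	}
}

int main(void)
{
	struct reclaim_state rs = { .reclaimed = 32 };
	struct scan_control sc = { .nr_reclaimed = 0, .global_reclaim = true };

	flush_reclaim_state(&sc, &rs);
	printf("nr_reclaimed = %lu\n", sc.nr_reclaimed);	/* 32 */
	return 0;
}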
1811 * Before reclaiming the folio, try to relocate
2310 * this disrupts the LRU order when reclaiming for lower zones but
2604 * pressure reclaiming all the clean cache. And in some cases,
3047 * proportional to the cost of reclaiming each list, as
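Line 3047 comes from the anon/file pressure-balancing logic: the pressure put on each LRU is inversely proportional to the cost of reclaiming that list, weighted by swappiness. A hedged sketch of the proportion, assuming the kernel's 0-200 swappiness scale and cost counters named anon_cost/file_cost; the extra smoothing the kernel applies to the cost terms is omitted:

#include <stdio.h>

/*
 * Split scan pressure between anon and file LRUs so that each list
 * gets pressure inversely proportional to its reclaim cost, scaled
 * by swappiness (0..200; 200 means anon and file IO cost the same).
 */
static void scan_balance(unsigned long anon_cost, unsigned long file_cost,
			 int swappiness,
			 unsigned long *anon_frac, unsigned long *file_frac)
{
	unsigned long total = anon_cost + file_cost;

	/* +1 keeps the division safe while a cost is still zero. */
	*anon_frac = swappiness * (total + 1) / (anon_cost + 1);
	*file_frac = (200 - swappiness) * (total + 1) / (file_cost + 1);
}

int main(void)
{
	unsigned long ap, fp;

	scan_balance(100, 300, 60, &ap, &fp);	/* file is costlier here */
	printf("anon:file pressure = %lu:%lu\n", ap, fp);
	return 0;
}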
3755 static int folio_inc_gen(struct lruvec *lruvec, struct folio *folio, bool reclaiming)
3775 if (reclaiming)
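Lines 3755/3775 are from MGLRU's folio_inc_gen(), where the reclaiming flag decides whether the folio is tagged for reclaim so writeback completion can finish the job. A userspace model of the lock-free flags update, using C11 atomics in place of the kernel's try_cmpxchg(); the masks, shifts, and PG_RECLAIM bit here are illustrative stand-ins:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define MAX_NR_GENS	4
#define GEN_SHIFT	0
#define GEN_MASK	(0x7UL << GEN_SHIFT)	/* stand-in for LRU_GEN_MASK */
#define PG_RECLAIM	(1UL << 3)		/* stand-in for BIT(PG_reclaim) */

struct folio { _Atomic unsigned long flags; };

/*
 * Move a folio one generation forward without taking a lock: retry a
 * compare-and-swap on the flags word until no other CPU races us.
 * Only when the caller is actively reclaiming do we also set the
 * PG_RECLAIM stand-in, for writeback completion to act on.
 */
static int folio_inc_gen(struct folio *folio, int old_gen, bool reclaiming)
{
	unsigned long new_flags, old_flags = atomic_load(&folio->flags);
	int new_gen;

	do {
		new_gen = (old_gen + 1) % MAX_NR_GENS;

		new_flags = old_flags & ~(GEN_MASK | PG_RECLAIM);
		new_flags |= ((unsigned long)new_gen + 1) << GEN_SHIFT;
		if (reclaiming)
			new_flags |= PG_RECLAIM;
	} while (!atomic_compare_exchange_weak(&folio->flags, &old_flags, new_flags));

	return new_gen;
}

int main(void)
{
	struct folio f = { .flags = 1 };	/* generation 0 stored as 1 */

	printf("new gen = %d\n", folio_inc_gen(&f, 0, true));
	return 0;
}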
6293 * Global reclaiming within direct reclaim at DEF_PRIORITY is a normal
6298 * reclaiming implies that kswapd is not keeping up and it is best to
6331 * stop reclaiming one LRU and reduce the amount of scanning
6405 * It will give up earlier than that if there is difficulty reclaiming pages.
6449 * inactive lists are large enough, continue reclaiming
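Line 6449 belongs to the should-continue decision for compaction-driven reclaim: keep reclaiming while not enough pages have been freed for the requested order and the inactive lists still have room. A sketch of that check, assuming the kernel's compact_gap() heuristic of roughly twice the request size; helper names are stand-ins:

#include <stdbool.h>
#include <stdio.h>

/* Headroom compaction needs for an order-N allocation: roughly twice
 * the request, following the kernel's compact_gap() heuristic. */
static unsigned long compact_gap(unsigned int order)
{
	return 2UL << order;
}

/*
 * If we have not reclaimed enough pages for compaction and the
 * inactive lists are large enough, continue reclaiming.
 */
static bool should_continue_reclaim(unsigned int order,
				    unsigned long inactive_lru_pages)
{
	return inactive_lru_pages > compact_gap(order);
}

int main(void)
{
	/* order-3 needs ~16 pages of headroom; 100 inactive -> keep going */
	printf("%d\n", should_continue_reclaim(3, 100));
	return 0;
}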
6724 * Take care that memory controller reclaiming has only a small influence
6863 * If we're having trouble reclaiming, start doing
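Line 6863 is the direct-reclaim escalation in do_try_to_free_pages(): once priority has dropped a couple of steps below DEF_PRIORITY, reclaim stops deferring to laptop mode and starts writing dirty pages itself. A compact sketch of that threshold; DEF_PRIORITY is 12 in the kernel, and the struct here is a stand-in for the relevant scan_control fields:

#include <stdbool.h>

#define DEF_PRIORITY 12		/* the kernel's default scan priority */

struct scan_control {
	int priority;		/* counts down toward 0 as reclaim struggles */
	bool may_writepage;
};

/*
 * If we're having trouble reclaiming, start doing writepage even in
 * laptop mode: after a couple of failed passes, pay the IO cost.
 */
static void maybe_enable_writepage(struct scan_control *sc)
{
	if (sc->priority < DEF_PRIORITY - 2)
		sc->may_writepage = true;
}

int main(void)
{
	struct scan_control sc = { .priority = 9, .may_writepage = false };

	maybe_enable_writepage(&sc);	/* 9 < 10, so writepage turns on */
	return sc.may_writepage ? 0 : 1;
}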
7413 * Returns the order kswapd finished reclaiming at.
7472 * then consider reclaiming from all zones. This has a dual
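Line 7472 is from kswapd's balancing loop: when buffer_head usage exceeds its limit, reclaim is widened to every zone, which both strips buffer_heads during rotation on 64-bit and relieves lowmem pinning on 32-bit. A schematic of raising the reclaim index to the highest managed zone; the struct shapes and helper names are stand-ins for the pgdat/zone internals:

#include <stdbool.h>
#include <stdio.h>

#define MAX_NR_ZONES 4

struct zone  { unsigned long managed_pages; };
struct pgdat { struct zone node_zones[MAX_NR_ZONES]; };

static bool managed_zone(const struct zone *zone)
{
	return zone->managed_pages != 0;
}

/*
 * With buffer_heads over their limit, consider reclaiming from all
 * zones: point the scan at the highest zone that actually has pages,
 * not just the zones the failing allocation could use.
 */
static int widen_reclaim_idx(const struct pgdat *pgdat, int reclaim_idx,
			     bool buffer_heads_over_limit)
{
	int i;

	if (!buffer_heads_over_limit)
		return reclaim_idx;

	for (i = MAX_NR_ZONES - 1; i >= 0; i--) {
		if (managed_zone(&pgdat->node_zones[i]))
			return i;
	}
	return reclaim_idx;
}

int main(void)
{
	struct pgdat pgdat = { .node_zones = { { 1000 }, { 1000 }, { 0 }, { 0 } } };

	/* Highest managed zone is index 1, even if reclaim started at 0. */
	printf("reclaim_idx = %d\n", widen_reclaim_idx(&pgdat, 0, true));
	return 0;
}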
7527 * referenced before reclaiming. All pages are rotated
7533 * If we're having trouble reclaiming, start doing writepage
7572 * progress in reclaiming pages
7623 * Return the order kswapd stopped reclaiming at as
7803 * reclaim fails then kswapd falls back to reclaiming for
7805 * for the order it finished reclaiming at (reclaim_order)
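Lines 7803-7805 describe the kswapd main loop's fallback: reclaim begins at the requested order, drops back to order-0 if the high-order attempt fails, then kswapd considers sleeping for the order it actually finished at (reclaim_order) while kcompactd is woken for the original request (alloc_order). A schematic of that control flow; balance_node() is a hypothetical stand-in for balance_pgdat(), and the prints stand in for the real wakeup and sleep paths:

#include <stdio.h>

/* Hypothetical stand-in for balance_pgdat(): returns the order it
 * finished reclaiming at, possibly lower than what was asked for. */
static unsigned int balance_node(unsigned int order)
{
	return order > 3 ? 0 : order;	/* pretend high orders fail */
}

static void kswapd_iteration(unsigned int alloc_order)
{
	/* Fall back to order-0 reclaim if the high-order attempt fails. */
	unsigned int reclaim_order = balance_node(alloc_order);

	/* Compaction is still woken for the original request... */
	printf("wake kcompactd for order %u\n", alloc_order);
	/* ...but kswapd sleeps for the order it finished reclaiming at. */
	printf("sleep at reclaim_order %u\n", reclaim_order);
}

int main(void)
{
	kswapd_iteration(9);
	return 0;
}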
7828 * pgdat. It will wake up kcompactd after reclaiming memory. If kswapd reclaim
7883 * LRU order by reclaiming preferentially