Lines Matching refs:page

364 * The normal page dirty throttling mechanism in balance_dirty_pages() is
448 * charged to the target memcg, causing an entire page to be freed.
449 * If we count the entire page as reclaimed from the memcg, we end up
536 * The page cannot be swapped.
1060 * A freeable page cache folio is referenced only by the caller
1061 * that isolated the folio, the page cache and optional filesystem
1283 res = mapping->a_ops->writepage(&folio->page, &wbc);
1327 * get_user_pages(&page);
1329 * write_to(page);
1377 * only page cache folios found in these are zero pages
1482 * All mapped folios start out with page table
1488 * inactive list. Another page table reference will
1594 * When this happens, 'page' will likely just be discarded
1685 /* folio_update_gen() tried to promote this page? */
1878 * The folio is mapped into the page tables of one or more
2211 get_pageblock_migratetype(&folio->page) == MIGRATE_CMA;
2221 * Isolate pages from the lruvec to fill the @dst list, scanning up to nr_to_scan entries.
2345 * may need to be cleared by the caller before letting the page go.
2379 * then get rescheduled. When a massive number of tasks are doing page
2512 * device by writing to the page cache, it sets PF_LOCAL_THROTTLE. In this case
3034 * If there is enough inactive page cache, we do not reclaim
3050 * anonymous page vs reloading a filesystem page (swappiness).
3157 * Make sure we don't miss the last page on
3554 * are three interesting cases for this page table walker:
3731 /* promote pages accessed through page tables */
3740 /* lru_gen_del_folio() has isolated this page? */
3766 /* folio_update_gen() has promoted this page? */
3863 * Some userspace memory allocators map many single-page VMAs. Instead of
4474 * is less efficient, but it avoids bursty page faults.
4705 /* feedback from rmap walkers to page table walkers */
6403 * true if more pages should be reclaimed such that when the page allocator
6556 * it implies that the long-lived page allocation rate
6557 * is exceeding the page laundering rate. Either the
6559 * processes due to the page distribution throughout
6593 * Legacy memcg will stall in page writeback so avoid forcibly
6652 * compaction a reasonable chance of completing and allocating the page.
6693 * This is the direct reclaim path, for page-allocating processes. We only
6739 * page allocations.
6821 * This is the main entry point to direct page reclaim.
6972 * happens, the page allocator should not consider triggering the OOM killer.
7080 * 1 is returned so that the page allocator does not OOM kill at this
7417 * found to have free_pages <= high_wmark_pages(zone), any page in that zone
7666 * Compaction records what page blocks it recently failed to
7738 * asynchronous contexts that cannot page things out.
7758 * never get caught in the normal page freeing logic.
7762 * page out something else, and this flag essentially protects
8015 /* Work out how many page cache pages we can reclaim in this reclaim_mode */
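For context on the writepage call matched at line 1283 above: it sits in the reclaim pageout path, which hands the filesystem a preconfigured struct writeback_control. Below is a minimal sketch modeled on mm/vmscan.c's pageout(), simplified for illustration (the real function also checks folio state, records write errors via handle_write_error(), and handles AOP_WRITEPAGE_ACTIVATE); it is not the verbatim kernel code.

    /*
     * Minimal sketch, modeled on mm/vmscan.c:pageout(); simplified,
     * not verbatim. Reclaim asks the filesystem to start asynchronous
     * writeback of one dirty page cache folio.
     */
    static pageout_t pageout_sketch(struct folio *folio,
                                    struct address_space *mapping)
    {
            struct writeback_control wbc = {
                    .sync_mode = WB_SYNC_NONE,       /* async: don't wait for I/O */
                    .nr_to_write = SWAP_CLUSTER_MAX,
                    .range_start = 0,
                    .range_end = LLONG_MAX,
                    .for_reclaim = 1,                /* tell the fs this is reclaim */
            };
            int res;

            folio_set_reclaim(folio);
            res = mapping->a_ops->writepage(&folio->page, &wbc);
            if (res < 0)
                    return PAGE_ACTIVATE;    /* simplified: real code also logs the error */
            return PAGE_SUCCESS;
    }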