Lines matching defs:page (from mm/filemap.c)
70 * finished 'unifying' the page and buffer cache and SMP-threaded the
71 * page-cache, 21.05.1999, Ingo Molnar <mingo@redhat.com>
146 /* Leave page->index set: truncation lookup relies upon it */
157 pr_alert("BUG: Bad page cache in process %s pfn:%05lx\n",
159 dump_page(&folio->page, "still mapped when deleted");
164 int mapcount = page_mapcount(&folio->page);
169 * a good bet that actually the page is unmapped
171 * another bad page check should catch it later.
173 page_mapcount_reset(&folio->page);
179 /* hugetlb folios do not participate in page cache accounting. */
201 * occur when a driver which did get_user_pages() sets page dirty
215 * Delete a page from the page cache and free it. Caller has to make
216 * sure the page is locked and that nobody else uses it - or that usage
243 * filemap_remove_folio - Remove folio from page cache.
247 * verified to be in the page cache. It will never put the folio into
248 * the free list because the caller has a reference on the page.
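The contract described above can be illustrated with a truncation-style call site. A minimal sketch (note filemap_remove_folio() is internal to mm/, and my_drop_folio() is a hypothetical helper):

    /* Sketch: caller holds the folio lock and its own reference. */
    static void my_drop_folio(struct folio *folio)      /* hypothetical */
    {
            folio_lock(folio);
            if (folio->mapping)                 /* still in the page cache? */
                    filemap_remove_folio(folio);
            folio_unlock(folio);
            folio_put(folio);                   /* may now free the folio */
    }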
267 * page_cache_delete_batch - delete several folios from page cache
273 * by page index and is optimised for it to be dense.
296 * A page got inserted in our range? Skip it. We have our
298 * If we see a page whose index is higher than ours, it
299 * means our page has been removed, which shouldn't be
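The fragments above describe how the batch delete walks i_pages in index order. A simplified sketch of that walk (the real page_cache_delete_batch() also preserves shadow entries and updates accounting; the caller holds the xa_lock):

    static void delete_batch_sketch(struct address_space *mapping,
                                    struct folio_batch *fbatch)
    {
            XA_STATE(xas, &mapping->i_pages, fbatch->folios[0]->index);
            struct folio *folio;
            int i = 0;

            xas_for_each(&xas, folio, ULONG_MAX) {
                    if (i >= folio_batch_count(fbatch))
                            break;
                    if (xa_is_value(folio))
                            continue;       /* shadow entry, not ours */
                    if (folio != fbatch->folios[i])
                            continue;       /* inserted in our range: skip */
                    folio->mapping = NULL;
                    xas_store(&xas, NULL);  /* unlink from the tree */
                    i++;
            }
    }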
408 * these two operations is that if a dirty page/buffer is encountered, it must
461 * filemap_range_has_page - check if a page exists in range.
466 * Find at least one page in the range supplied, usually used to check if
469 * Return: %true if at least one page exists in the specified range,
491 * We don't need to try to pin this page; we're about to
493 * there was a page here recently.
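filemap_range_has_page() takes byte offsets and reports whether any cached page overlaps them; a typical use, sketched here, is failing a non-blocking direct write when cached data is present:

    if (filemap_range_has_page(file->f_mapping, pos, pos + count - 1))
            return -EAGAIN;     /* e.g. an IOCB_NOWAIT direct write bails out */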
828 /* hugetlb pages do not participate in page cache accounting. */
906 /* hugetlb pages do not participate in page cache accounting */
926 /* Leave page->index set: truncation relies upon it */
1025 * sure the appropriate page became available, this saves space
1049 * The page wait code treats the "wait->flags" somewhat unusually, because
1166 * It's possible to miss clearing waiters here, when we woke our page
1170 * Note that, depending on the page pool (buddy, hugetlb, ZONE_DEVICE,
1171 * other), the flag may be cleared in the course of freeing the page;
1191 EXCLUSIVE, /* Hold ref to page and take the bit when woken, like
1194 SHARED, /* Hold ref to page and check the bit when woken, like
1197 DROP, /* Drop ref to page before wait, no check when woken,
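These three fragments belong to the enum behavior that the folio wait code dispatches on; reconstructed here for context (comment text completed from the surrounding source, so treat it as a sketch):

    enum behavior {
            EXCLUSIVE,  /* Hold ref to page and take the bit when woken, like
                         * __folio_lock() waiting on then setting PG_locked.
                         */
            SHARED,     /* Hold ref to page and check the bit when woken, like
                         * folio_wait_writeback() waiting on PG_writeback.
                         */
            DROP,       /* Drop ref to page before wait, no check when woken,
                         * like folio_put_wait_locked() on PG_locked.
                         */
    };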
1255 * page bit synchronously.
1261 * page queue), and add ourselves to the wait
1276 * see whether the page bit testing has already
1364 * Wait for a migration entry referencing the given page to be removed. This is
1365 * equivalent to put_and_wait_on_page_locked(page, TASK_UNINTERRUPTIBLE) except
1366 * this can be called without taking a reference on the page. Instead this
1368 * the page.
1406 * If a migration entry exists for the page the migration path must hold
1407 * a valid reference to the page, and it must take the ptl to remove the
1408 * migration entry. So the page is valid until the ptl is dropped.
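A sketch of the caller pattern this describes (the migration_entry_wait_on_locked() signature has changed across kernel versions, so take this as illustrative): with the ptl held, the migration entry pins the page, so no extra reference is needed before sleeping.

    ptep = pte_offset_map_lock(mm, pmd, address, &ptl);
    pte = ptep_get(ptep);
    pte_unmap(ptep);
    if (is_swap_pte(pte) && is_migration_entry(pte_to_swp_entry(pte)))
            migration_entry_wait_on_locked(pte_to_swp_entry(pte), ptl);
    else
            spin_unlock(ptl);   /* not a migration entry after all */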
1455 * The caller should hold a reference on @folio. They expect the page to
1514 * Unlocks the folio and wakes up any thread sleeping on the page lock.
1713 * page_cache_next_miss() - Find the next gap in the page cache.
1749 * page_cache_prev_miss() - Find the previous gap in the page cache.
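Both helpers scan for a gap; a usage sketch for the forward direction (page_cache_next_miss() returns the first absent index within the scan window, so everything before it is known to be cached):

    pgoff_t hole = page_cache_next_miss(mapping, index, 128);

    if (hole > index)
            nr_present = hole - index;  /* [index, hole) are all cached */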
1785 * Lockless page cache protocol:
1792 * A. Freeze the page (by zeroing the refcount if nobody else has a reference)
1793 * B. Remove the page from i_pages
1794 * C. Return the page to the page allocator
1796 * This means that any page may have its reference count temporarily
1797 * increased by a speculative page cache (or fast GUP) lookup as it can
1800 * last refcount on the page, any page allocation must be freeable by
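The lookup side of this protocol, sketched after the body of filemap_get_entry() (on older kernels folio_try_get() here was spelled folio_try_get_rcu()):

    XA_STATE(xas, &mapping->i_pages, index);
    struct folio *folio;

    rcu_read_lock();
repeat:
    xas_reset(&xas);
    folio = xas_load(&xas);
    if (xas_retry(&xas, folio))
            goto repeat;
    if (folio && !xa_is_value(folio)) {
            if (!folio_try_get(folio))      /* refcount frozen: being freed */
                    goto repeat;
            if (unlikely(folio != xas_reload(&xas))) {
                    folio_put(folio);       /* moved or deleted underneath us */
                    goto repeat;
            }
    }
    rcu_read_unlock();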
1805 * filemap_get_entry - Get a page cache entry.
1807 * @index: The page cache index.
1809 * Looks up the page cache entry at @mapping & @index. If it is a folio,
1828 * A shadow entry of a recently evicted page, or a swap entry from
1829 * shmem/tmpfs. Return it without attempting to raise page count.
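Because filemap_get_entry() can return either kind of entry, callers have to disambiguate with xa_is_value(); a sketch:

    void *entry = filemap_get_entry(mapping, index);

    if (!entry) {
            /* nothing cached at this index */
    } else if (xa_is_value(entry)) {
            /* shadow (workingset) or swap (shmem) entry: no ref was taken */
    } else {
            struct folio *folio = entry;
            /* a folio reference was taken for us; drop it when done */
            folio_put(folio);
    }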
1850 * @index: The page index.
1854 * Looks up the page cache entry at @mapping & @index.
1885 /* Has the page been truncated? */
1956 * filemap_add_folio locks the page, and for mmap
1957 * we expect an unlocked page.
1983 * A shadow entry of a recently evicted page, a swap
1985 * without attempting to raise page count.
2007 * @start: The starting page cache index
2008 * @end: The final page index (inclusive).
2053 * @start: The starting page cache index.
2054 * @end: The final page index (inclusive).
2117 * @start: The starting page index
2118 * @end: The final page index (inclusive)
2129 * page cache. If folios are added to or removed from the page cache
2157 * We come here when there is no page beyond @end. We take care to not
2159 * breaks the iteration when there is a page at index -1 but that is
2176 * @start: The starting page index
2177 * @end: The final page index (inclusive)
2249 * @start: The starting page index
2250 * @end: The final page index (inclusive)
2269 * is lockless so there is a window for page reclaim to evict
2270 * a page we saw tagged. Skip over it.
2284 * We come here when there is no page beyond @end. We take care to not
2286 * breaks the iteration when there is a page at index -1 but that is
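The comments above describe the batched tagged lookup (filemap_get_folios_tag() on 6.x kernels); a writeback-style usage sketch:

    struct folio_batch fbatch;
    pgoff_t index = start;
    unsigned i, nr;

    folio_batch_init(&fbatch);
    while ((nr = filemap_get_folios_tag(mapping, &index, end,
                                        PAGECACHE_TAG_DIRTY, &fbatch))) {
            for (i = 0; i < nr; i++) {
                    struct folio *folio = fbatch.folios[i];
                    /*
                     * The lookup is lockless, so reclaim may already have
                     * evicted a folio we saw tagged; lock and recheck.
                     */
                    folio_lock(folio);
                    if (folio->mapping == mapping && folio_test_dirty(folio))
                            ;   /* write it back */
                    folio_unlock(folio);
            }
            folio_batch_release(&fbatch);
            cond_resched();
    }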
2379 /* Start the actual read. The read will unlock the page. */
2492 * pagecache folios after evicting page cache during truncate
2495 * the page cache as the locked folio would then be enough to
2546 /* "last_index" is the index of the page beyond the end of the read */
2605 * filemap_read - Read data from the page cache.
2610 * Copies data from the page cache. If the data is not currently present,
2661 * part of the page is not copied back to userspace (unless
2671 * block_write_end()->mark_buffer_dirty() or other page
2673 * changes to page contents are visible before we see
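Filesystems usually reach filemap_read() through generic_file_read_iter(); a sketch of the wiring (the myfs names are hypothetical):

    const struct file_operations myfs_file_ops = {
            .read_iter  = generic_file_read_iter, /* buffered -> filemap_read() */
            .write_iter = generic_file_write_iter,
            .llseek     = generic_file_llseek,
            .mmap       = generic_file_mmap,
    };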
2769 * the new data. We invalidate clean cached page from the region we're
2783 * that can use the page cache directly.
2850 struct page *page;
2853 page = folio_page(folio, offset / PAGE_SIZE);
2864 .page = page,
2870 page++;
2937 * part of the page is not copied back to userspace (unless
3031 * mapping_seek_hole_data - Seek for SEEK_DATA / SEEK_HOLE in the page cache.
3037 * If the page cache knows which blocks contain holes and which blocks
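For a filesystem whose data lives entirely in the page cache (tmpfs calls this helper from its llseek), the result is visible from userspace through lseek(2); a sketch:

    off_t data = lseek(fd, 0, SEEK_DATA);       /* first data byte at/after 0 */
    off_t hole = lseek(fd, data, SEEK_HOLE);    /* first hole at/after data */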
3097 * lock_folio_maybe_drop_mmap - lock the page, possibly dropping the mmap_lock
3142 * Synchronous readahead happens when we don't even find a page in the page
3213 * Asynchronous readahead happens when we find the page and PG_readahead,
3242 * filemap_fault - read in file data for page fault handling
3246 * mapped memory region to read in file data during a page fault.
3249 * it in the page cache, and handles the special cases reasonably without
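filemap_fault() is normally installed through a vm_operations_struct at mmap time, which is essentially what generic_file_mmap() does; a sketch with hypothetical myfs names:

    static const struct vm_operations_struct myfs_vm_ops = {
            .fault          = filemap_fault,
            .map_pages      = filemap_map_pages,
            .page_mkwrite   = filemap_page_mkwrite,
    };

    static int myfs_mmap(struct file *file, struct vm_area_struct *vma)
    {
            file_accessed(file);
            vma->vm_ops = &myfs_vm_ops;
            return 0;
    }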
3281 * Do we have something in the page cache already?
3286 * We found the page, so try async readahead before waiting for
3296 /* No page in the page cache at all */
3333 * We have a locked page in the page cache, now we need to check
3338 * The page was in cache and uptodate and now it is not.
3339 * Strange but possible since we didn't hold the page lock all
3364 * Found the page and have a reference on it.
3365 * We must recheck i_size under page lock.
3374 vmf->page = folio_file_page(folio, index);
3379 * Umm, take care of errors if the page isn't up-to-date.
3400 * page.
3417 /* Huge page is mapped? No need to proceed. */
3425 struct page *page = folio_file_page(folio, start);
3426 vm_fault_t ret = do_set_pmd(vmf, page);
3428 /* The page is mapped successfully, reference consumed. */
3457 /* Has the page moved or been split? */
3482 * Map page range [start_page, start_page + nr_pages) of folio.
3491 struct page *page = folio_page(folio, start);
3496 if (PageHWPoison(page + count))
3513 set_pte_range(vmf, folio, page, count, addr);
3520 page += count;
3527 set_pte_range(vmf, folio, page, count, addr);
3543 struct page *page = &folio->page;
3545 if (PageHWPoison(page))
3561 set_pte_range(vmf, folio, page, 1, addr);
3634 struct folio *folio = page_folio(vmf->page);
3744 /* Someone else locked and filled the page in a very small window */
3765 * read_cache_folio - Read into page cache, fill it if needed.
3771 * Read one page into the page cache. If it succeeds, the folio returned
3772 * will contain @index, but it may not be the first page of the folio.
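A usage sketch for read_cache_folio(): look up the folio containing byte @pos, passing a NULL filler so the mapping's ->read_folio() is used:

    struct folio *folio = read_cache_folio(mapping, pos / PAGE_SIZE, NULL, file);

    if (IS_ERR(folio))
            return PTR_ERR(folio);
    /* The folio contains the index but may not start at it: */
    offset = offset_in_folio(folio, pos);
    folio_put(folio);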
3789 * mapping_read_folio_gfp - Read into page cache, using specified allocation flags.
3792 * @gfp: The page allocator flags to use if allocating.
3812 static struct page *do_read_cache_page(struct address_space *mapping,
3819 return &folio->page;
3823 struct page *read_cache_page(struct address_space *mapping,
3832 * read_cache_page_gfp - read into page cache, using specified page allocation flags.
3833 * @mapping: the page's address_space
3834 * @index: the page index
3835 * @gfp: the page allocator flags to use if allocating
3838 * any new page allocations done using the specified allocation flags.
3840 * If the page does not get brought uptodate, return -EIO.
3844 * Return: up to date page on success, ERR_PTR() on failure.
3846 struct page *read_cache_page_gfp(struct address_space *mapping,
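A sketch of this legacy page-based variant, e.g. under filesystem locks where GFP_NOFS is required:

    struct page *page = read_cache_page_gfp(mapping, index, GFP_NOFS);

    if (IS_ERR(page))
            return PTR_ERR(page);
    /* page is uptodate here; release the reference when finished */
    put_page(page);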
3855 * Warn about a page cache invalidation failure during a direct I/O write.
3893 * If a page can not be invalidated, return 0 to fall back
3951 struct page *page;
3952 unsigned long offset; /* Offset into pagecache page */
3953 unsigned long bytes; /* Bytes to write to page */
3963 * Bring in the user page that we will copy from _first_.
3965 * same page as we're writing to, without it being marked
3979 &page, &fsdata);
3984 flush_dcache_page(page);
3986 copied = copy_page_from_iter_atomic(page, offset, bytes, i);
3987 flush_dcache_page(page);
3990 page, fsdata);
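These fragments come from the buffered-write loop (generic_perform_write()); a condensed sketch of its shape, with the writably-mapped flush and most error handling omitted. The key ordering: fault in the user source page *before* write_begin(), because faulting while holding the locked destination page could deadlock when source and destination are the same page.

again:
    offset = pos & (PAGE_SIZE - 1);
    bytes = min_t(unsigned long, PAGE_SIZE - offset, iov_iter_count(i));

    if (fault_in_iov_iter_readable(i, bytes) == bytes)
            return -EFAULT;                 /* nothing readable at all */

    status = a_ops->write_begin(file, mapping, pos, bytes, &page, &fsdata);
    if (status)
            return status;

    copied = copy_page_from_iter_atomic(page, offset, bytes, i);
    flush_dcache_page(page);

    status = a_ops->write_end(file, mapping, pos, bytes, copied,
                              page, fsdata);
    if (copied == 0)
            goto again;                     /* partial fault: retry */
    pos += copied;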
4065 * page-cache pages correctly).
4116 * This will also be called if the private_2 flag is set on a page,
4120 * this page (__GFP_IO), and whether the call may block
4143 * filemap_cachestat() - compute the page cache statistics of a mapping
4145 * @first_index: The starting page cache index.
4146 * @last_index: The final page index (inclusive).
4149 * This will query the page cache statistics of a mapping in the
4150 * page range of [first_index, last_index] (inclusive). The statistics
4193 /* page is evicted */
4213 /* page is in cache */
4234 * cachestat() returns the page cache statistics of a file in the
4239 * An evicted page is a page that is previously in the page cache
4240 * but has been evicted since. A page is recently evicted if its last
4254 * Because the status of a page can change after cachestat() checks it
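From userspace this surfaces as the cachestat(2) syscall (Linux 6.5+); a sketch, assuming headers recent enough to provide __NR_cachestat and the uapi structs:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/mman.h>     /* struct cachestat, struct cachestat_range */

    struct cachestat_range range = { .off = 0, .len = 0 }; /* len 0: to EOF */
    struct cachestat cs;

    if (syscall(__NR_cachestat, fd, &range, &cs, 0) == 0)
            printf("cached %llu, recently evicted %llu\n",
                   (unsigned long long)cs.nr_cache,
                   (unsigned long long)cs.nr_recently_evicted);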