Lines Matching defs:page
147 struct page **send_buf_pages;
1126 * Start with a small buffer (1 page). If later we end up needing more
1128 * than the page size, attempt to increase the buffer. Typically xattr
1634 * The last extent of a file may be too large due to page alignment.
5260 struct page *page;
5276 page = find_lock_page(sctx->cur_inode->i_mapping, index);
5277 if (!page) {
5282 page = find_or_create_page(sctx->cur_inode->i_mapping,
5284 if (!page) {
5290 if (PageReadahead(page))
5292 &sctx->ra, NULL, page_folio(page),
5295 if (!PageUptodate(page)) {
5296 btrfs_read_folio(NULL, page_folio(page));
5297 lock_page(page);
5298 if (!PageUptodate(page)) {
5299 unlock_page(page);
5302 page_offset(page), sctx->cur_ino,
5304 put_page(page);
5310 memcpy_from_page(sctx->send_buf + sctx->send_size, page,
5312 unlock_page(page);
5313 put_page(page);
5647 * We want to do I/O directly into the send buffer, so get the next page
5743 * It's very likely there are no pages from this inode in the page
5745	 * the page cache to avoid thrashing the page cache (adding pressure
5746 * to the page cache and forcing eviction of other data more useful
5749 * We decide if we should clean the page cache simply by checking
5777 * Always operate only on ranges that are a multiple of the page
5778 * size. This is not only to prevent zeroing parts of a page in
5780 * pages, as passing a range that is smaller than page size does
5781 * not evict the respective page (only zeroes part of its content).
5794 * up being read and placed in the page cache. So when truncating
5795 * the page cache we always start from the end offset of the
6289 * the page size (currently the same as sector size).