Lines Matching defs:page

170 		buffer_io_error(bh, ", lost sync page write");
181 * But it's the page lock which protects the buffers. To get around this,
186 * may be quite high. This code could TryLock the page, and if that
223 /* we might be here because some of the buffers on this page are
260 buffer_io_error(bh, ", async page read");
267 * decide that the page is now completely done.
286 * If all of the buffers are uptodate then we can set the page
397 buffer_io_error(bh, ", lost async page write");
427 * If a page's buffers are under async read-in (end_buffer_async_read
432 * the page. So the absence of BH_Async_Read tells end_buffer_async_read()
435 * The page comes unlocked when it has no locked buffer_async buffers
442 * page.
444 * PageLocked prevents anyone from starting writeback of a page which is
445 * under read I/O (PageWriteback is only ever set against a locked page).
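
The fragments above all belong to the async read completion path (end_buffer_async_read() and friends). For context, here is a minimal sketch of how a buffer_head-backed filesystem reaches that path, assuming a kernel of roughly this vintage where block_read_full_folio() is available; myfs_read_folio() and myfs_get_block() are hypothetical names, and the identity block mapping is for illustration only.

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/writeback.h>
#include <linux/buffer_head.h>

/* Hypothetical mapping callback: a real filesystem would translate iblock
 * to its on-disk location; the identity mapping is illustration only. */
static int myfs_get_block(struct inode *inode, sector_t iblock,
			  struct buffer_head *bh_result, int create)
{
	map_bh(bh_result, inode->i_sb, iblock);
	return 0;
}

/* ->read_folio: block_read_full_folio() submits one read per non-uptodate
 * buffer; end_buffer_async_read() runs once per buffer, and the folio is
 * unlocked only when the last buffer on it completes, as described above. */
static int myfs_read_folio(struct file *file, struct folio *folio)
{
	return block_read_full_folio(folio, myfs_get_block);
}
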
698 * Add a page to the dirty page list.
703 * If the page has buffers, the uptodate buffers are set dirty, to preserve
704 * dirty-state coherency between the page and the buffers. If the page does
708 * The buffers are dirtied before the page is dirtied. There's a small race
709 * window in which a writepage caller may see the page cleanness but not the
710 * buffer dirtiness. That's fine. If this code were to set the page dirty
711 * before the buffers, a concurrent writepage caller could clear the page dirty
713 * page on the dirty page list.
716 * page's buffer list. Also use this to protect against clean buffers being
717 * added to the page after it was set dirty.
738 * Lock out page's memcg migration to keep PageDirty
739 * synchronized with per-memcg dirty page counters.
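
The comment above is the kerneldoc of block_dirty_folio(). A minimal sketch of how it is usually wired up, continuing the hypothetical myfs example; the helpers referenced are the real buffer_head ones, only the myfs_ names are invented.

static const struct address_space_operations myfs_aops = {
	/* Dirties the attached buffers under private_lock, then the folio. */
	.dirty_folio		= block_dirty_folio,
	.invalidate_folio	= block_invalidate_folio,
	.read_folio		= myfs_read_folio,	/* from the sketch above */
};
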
967 struct buffer_head *alloc_page_buffers(struct page *page, unsigned long size,
970 return folio_alloc_buffers(page_folio(page), size, retry);
1033 * Create the page-cache page that contains the requested block.
1093 * Create buffers for the specified block device block's page. If
1094 * that page was dirty, the buffers are set dirty also.
1117 /* Create a page with the proper size buffers. */
1154 * Whenever a page has any dirty buffers, the page's dirty bit is set, and
1155 * the page is tagged dirty in the page cache.
1158 * subsections of the page. If the page has buffers, the page dirty bit is
1161 * When a page is set dirty in its entirety, all its buffers are marked dirty
1162 * (if the page has buffers).
1164 * When a buffer is marked dirty, its page is dirtied, but the page's other
1168 * individually become uptodate. But their backing page remains not
1179 * its backing page dirty, then tag the page as dirty in the page cache
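
These rules are what mark_buffer_dirty() implements. A small usage sketch under the usual assumptions (a metadata block read with sb_bread(), same hypothetical myfs source file as above); blocknr and the payload are made up.

/* Read a block, modify it, and let mark_buffer_dirty() propagate the state:
 * the buffer is marked dirty, its page is dirtied, and the page is tagged
 * dirty in the page cache, exactly as the comment above describes. */
static int myfs_update_block(struct super_block *sb, sector_t blocknr,
			     const void *data, size_t len)
{
	struct buffer_head *bh = sb_bread(sb, blocknr);

	if (!bh)
		return -EIO;
	memcpy(bh->b_data, data, len);
	mark_buffer_dirty(bh);
	brelse(bh);
	return 0;
}
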
1235 * Decrement a buffer_head's reference count. If all buffers against a page
1236 * have zero reference count, are clean and unlocked, and if the page is clean
1237 * and unlocked then try_to_free_buffers() may strip the buffers from the page
1239 * a page but it ends up not being freed, and buffers may later be reattached).
1341 * attached page (i.e., try_to_free_buffers) so it could cause
1342 * failing page migration.
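
The reference-counting rule above can be shown in two lines: while any buffer on a page has an elevated b_count, try_to_free_buffers() backs off and the whole set stays attached. A thin sketch, still in the hypothetical myfs file:

/* Pin a buffer while deferred work is outstanding, and release it again so
 * memory reclaim can eventually strip the page's buffers. */
static void myfs_pin_bh(struct buffer_head *bh)
{
	get_bh(bh);	/* b_count elevated: buffers cannot be stripped */
}

static void myfs_unpin_bh(struct buffer_head *bh)
{
	put_bh(bh);	/* reference dropped: freeable once clean and unlocked */
}
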
1412 /* __find_get_block_slow will mark the page accessed */
1462 * @gfp: page allocation flag
1465 * If @gfp is zero, the page-cache page is allocated from a non-movable
1466 * area so that the attached buffers do not get in the way of page migration.
1552 * Called when truncating a buffer on a page completely.
1673 void create_empty_buffers(struct page *page,
1676 folio_create_empty_buffers(page_folio(page), blocksize, b_state);
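
A typical caller pattern for this wrapper, sketched under the usual assumptions (locked page, block size taken from the inode); the function name is hypothetical.

/* Ensure a locked page has buffer_heads attached, then walk the ring. */
static void myfs_for_each_bh(struct page *page, struct inode *inode)
{
	struct buffer_head *head, *bh;

	if (!page_has_buffers(page))
		create_empty_buffers(page, 1 << inode->i_blkbits, 0);
	head = page_buffers(page);
	bh = head;
	do {
		/* inspect or map bh here */
		bh = bh->b_this_page;
	} while (bh != head);
}
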
1795 * the page lock, whoever dirtied the buffers may decide to clean them
1800 * (wbc->sync_mode == WB_SYNC_NONE) then it will redirty a page which has a
2166 int __block_write_begin(struct page *page, loff_t pos, unsigned len,
2169 return __block_write_begin_int(page_folio(page), pos, len, get_block,
2218 struct page **pagep, get_block_t *get_block)
2221 struct page *page;
2224 page = grab_cache_page_write_begin(mapping, index);
2225 if (!page)
2228 status = __block_write_begin(page, pos, len, get_block);
2230 unlock_page(page);
2231 put_page(page);
2232 page = NULL;
2235 *pagep = page;
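
block_write_begin() is normally consumed straight from a filesystem's ->write_begin. A one-line delegation sketch, reusing the hypothetical myfs_get_block() from the first sketch:

static int myfs_write_begin(struct file *file, struct address_space *mapping,
			    loff_t pos, unsigned len,
			    struct page **pagep, void **fsdata)
{
	/* Grabs and locks the page, then maps and (if needed) reads the
	 * buffers covering [pos, pos + len). */
	return block_write_begin(mapping, pos, len, pagep, myfs_get_block);
}
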
2242 struct page *page, void *fsdata)
2244 struct folio *folio = page_folio(page);
2276 struct page *page, void *fsdata)
2282 copied = block_write_end(file, mapping, pos, len, copied, page, fsdata);
2288 * But it's important to update i_size while still holding page lock:
2289 * page writeout could otherwise come in and zero beyond i_size.
2296 unlock_page(page);
2297 put_page(page);
2302 * Don't mark the inode dirty under page lock. First, it unnecessarily
2303 * makes the holding time of page lock longer. Second, it forces lock
2304 * ordering of page lock and transaction start for journaling
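
The ordering described above (i_size updated while the page lock is held, the inode dirtied only after the page is unlocked) is exactly what generic_write_end() provides, so the matching ->write_end is usually a plain delegation; a sketch:

static int myfs_write_end(struct file *file, struct address_space *mapping,
			  loff_t pos, unsigned len, unsigned copied,
			  struct page *page, void *fsdata)
{
	/* Commits the copied bytes, updates i_size under the page lock,
	 * unlocks and releases the page, then marks the inode dirty. */
	return generic_write_end(file, mapping, pos, len, copied, page, fsdata);
}
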
2468 struct page *page;
2476 err = aops->write_begin(NULL, mapping, size, 0, &page, &fsdata);
2480 err = aops->write_end(NULL, mapping, size, 0, 0, page, fsdata);
2494 struct page *page;
2513 &page, &fsdata);
2516 zero_user(page, zerofrom, len);
2518 page, fsdata);
2532 /* page covers the boundary, find the boundary offset */
2546 &page, &fsdata);
2549 zero_user(page, zerofrom, len);
2551 page, fsdata);
2567 struct page **pagep, void **fsdata,
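
The fragments above come from generic_cont_expand_simple() and cont_expand_zero(), the zero-extension helpers behind cont_write_begin(). A sketch of how a FAT-style filesystem (one that must allocate contiguously and zero-fill up to the write position) might call it; struct myfs_inode_info, myfs_i() and i_alloc_bytes are hypothetical per-inode state recording how far the file has been allocated.

struct myfs_inode_info {
	loff_t		i_alloc_bytes;	/* hypothetical: bytes allocated so far */
	struct inode	vfs_inode;
};

static inline struct myfs_inode_info *myfs_i(struct inode *inode)
{
	return container_of(inode, struct myfs_inode_info, vfs_inode);
}

static int myfs_cont_write_begin(struct file *file,
				 struct address_space *mapping,
				 loff_t pos, unsigned len,
				 struct page **pagep, void **fsdata)
{
	*pagep = NULL;
	/* Zero-fills (via ->write_begin/->write_end, as in the code above)
	 * from the current allocation boundary up to pos, then prepares the
	 * page for the real write. */
	return cont_write_begin(file, mapping, pos, len, pagep, fsdata,
				myfs_get_block,
				&myfs_i(mapping->host)->i_alloc_bytes);
}
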
2589 void block_commit_write(struct page *page, unsigned from, unsigned to)
2591 struct folio *folio = page_folio(page);
2598 * called from a page fault handler when a page is first dirtied. Hence we must
2599 * be careful to check for EOF conditions here. We set the page up correctly
2600 * for a written page which means we get ENOSPC checking when writing into
2605 * protect against truncate races as the page could now be beyond EOF. Because
2607 * page lock we can determine safely if the page is beyond EOF. If it is not
2608 * beyond EOF, then the page is guaranteed safe against truncation until we
2609 * unlock the page.
2617 struct folio *folio = page_folio(vmf->page);
2627 /* We overload EFAULT to mean page got truncated */
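
The comment and fragments above describe block_page_mkwrite(). A sketch of the vm_operations_struct ->page_mkwrite handler that a buffer_head-backed filesystem would typically build around it, again with hypothetical myfs_ names; block_page_mkwrite_return() converts the errno (including the overloaded -EFAULT for "truncated under us") into a VM_FAULT_* code.

static vm_fault_t myfs_page_mkwrite(struct vm_fault *vmf)
{
	struct inode *inode = file_inode(vmf->vma->vm_file);
	int err;

	sb_start_pagefault(inode->i_sb);
	file_update_time(vmf->vma->vm_file);
	/* Locks the page, checks it is still within i_size, and prepares its
	 * buffers for writing; on success the page is returned locked. */
	err = block_page_mkwrite(vmf, myfs_get_block);
	sb_end_pagefault(inode->i_sb);
	return block_page_mkwrite_return(err);
}
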
2728 int block_write_full_page(struct page *page, get_block_t *get_block,
2731 struct folio *folio = page_folio(page);
2749 * in multiples of the page size. For a file that is not a multiple of
2750 * the page size, the remaining memory is zeroed when mapped, and
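
Finally, block_write_full_page() is the generic ->writepage for buffer-backed address_spaces; the quoted comment is the reason it re-zeroes the tail of a page that straddles i_size on every invocation. A closing delegation sketch for the hypothetical myfs:

static int myfs_writepage(struct page *page, struct writeback_control *wbc)
{
	/* Writes out the page's dirty buffers; any part beyond i_size is
	 * zeroed first because the page may be mmapped. */
	return block_write_full_page(page, myfs_get_block, wbc);
}
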