Lines matching defs:page
1141 * Shamelessly stolen from the mm implementation of page reference checking,
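The comment at 1141 sits above io_uring's request refcount helpers, which borrow mm's overflow-aware page refcount check. A minimal sketch of that pattern, assuming an atomic_t refcount field; the names here are illustrative, not the kernel's:

	#include <linux/atomic.h>
	#include <linux/bug.h>

	/*
	 * One unsigned comparison catches both a zero refcount and one that
	 * has wrapped negative to within 127 of zero, mirroring mm's
	 * page_ref_zero_or_close_to_overflow() (see commit f958d7b528b1).
	 */
	#define ref_zero_or_close_to_overflow(refs)	\
		((unsigned int) atomic_read(refs) + 127u <= 127u)

	static inline bool ref_inc_not_zero(atomic_t *refs)
	{
		/* only take a reference while the object is still live */
		return atomic_inc_not_zero(refs);
	}

	static inline void ref_get(atomic_t *refs)
	{
		WARN_ON_ONCE(ref_zero_or_close_to_overflow(refs));
		atomic_inc(refs);
	}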
3513 * This gets called when the page is unlocked, and we generally expect that to
3514 * happen when the page IO is completed and the page is now uptodate. This will
3516 * again. If the latter fails because the page was NOT uptodate, then we will
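Lines 3513-3516 document the waitqueue callback that fires when the awaited page is unlocked. A sketch of that callback's shape, using the pagemap waitqueue machinery; the retry hand-off is elided, and io_async_buf_func() in fs/io_uring.c is the real implementation:

	#include <linux/pagemap.h>
	#include <linux/wait.h>

	static int async_buf_func(struct wait_queue_entry *wait, unsigned mode,
				  int sync, void *arg)
	{
		struct wait_page_queue *wpq = container_of(wait,
						struct wait_page_queue, wait);
		struct wait_page_key *key = arg;

		/* the wakeup may be for a different page or bit; skip those */
		if (!wake_page_match(wpq, key))
			return 0;

		/* detach, then queue a task_work based retry of the read */
		list_del_init(&wait->entry);
		return 1;
	}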
3539 * This controls whether a given IO request should be armed for async page
3543 * will either succeed because the page is now uptodate and unlocked, or it
3544 * will register a callback when the page is unlocked at IO completion. Through
3710 * desired page gets unlocked. We can also get a partial read
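Lines 3539-3544 and 3710 describe arming a request for this page-unlock retry rather than punting it to a worker thread. A sketch of the arming step, assuming a wait_page_queue embedded in the request and the callback above; it mirrors the shape of io_rw_should_retry():

	#include <linux/fs.h>
	#include <linux/pagemap.h>

	static bool arm_async_read(struct kiocb *kiocb,
				   struct wait_page_queue *wpq, void *req)
	{
		/* the file must support buffered waitqueue-based reads */
		if (!(kiocb->ki_filp->f_mode & FMODE_BUF_RASYNC))
			return false;

		wpq->wait.func = async_buf_func;
		wpq->wait.private = req;
		wpq->wait.flags = 0;
		INIT_LIST_HEAD(&wpq->wait.entry);

		/* retry with IOCB_WAITQ: lock_page_async() registers us */
		kiocb->ki_flags |= IOCB_WAITQ;
		kiocb->ki_flags &= ~IOCB_NOWAIT;
		kiocb->ki_waitq = wpq;
		return true;
	}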
8853 struct page *page;
8858 page = virt_to_head_page(ptr);
8859 if (put_page_testzero(page))
8860 free_compound_page(page);
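The fragment at 8853-8860 is the body of the helper that frees ring memory: virt_to_head_page() resolves the allocation to its (possibly compound) head page, and dropping the final reference frees the whole compound. Reassembled with the usual NULL guard, this is the shape of io_mem_free():

	#include <linux/mm.h>

	static void io_mem_free(void *ptr)
	{
		struct page *page;

		if (!ptr)
			return;

		/* compound allocations are freed via their head page */
		page = virt_to_head_page(ptr);
		if (put_page_testzero(page))
			free_compound_page(page);
	}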
8983 * We check if the given compound head page has already been accounted, to
8985 * page, not just the constituent pages of a huge page.
8987 static bool headpage_already_acct(struct io_ring_ctx *ctx, struct page **pages,
8988 int nr_pages, struct page *hpage)
8992 /* check current page array */
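headpage_already_acct() (8987-8992) exists so that a huge page backing several entries of the pages[] array is charged to the user's memory accounting once, against its head page, rather than once per constituent page. A simplified sketch of the "check current page array" pass; the real helper also walks buffers already registered on the ctx:

	#include <linux/mm.h>

	static bool head_already_seen(struct page **pages, int nr_pages,
				      struct page *hpage)
	{
		int i;

		for (i = 0; i < nr_pages; i++) {
			/* order-0 pages cannot share a head page */
			if (!PageCompound(pages[i]))
				continue;
			if (compound_head(pages[i]) == hpage)
				return true;
		}
		return false;
	}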
9015 static int io_buffer_account_pin(struct io_ring_ctx *ctx, struct page **pages,
9017 struct page **last_hpage)
9026 struct page *hpage;
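io_buffer_account_pin() (9015-9026) walks the pinned pages and decides how many to charge: normal pages count one each, while a compound page is charged once, in full, the first time its head page is seen. A sketch of that walk, assuming the dedup helper above; the counter and function names are illustrative:

	/* returns the number of pages to charge against the memlock limit */
	static int count_acct_pages(struct page **pages, int nr_pages,
				    struct page **last_hpage)
	{
		int i, acct = 0;

		for (i = 0; i < nr_pages; i++) {
			struct page *hpage;

			if (!PageCompound(pages[i])) {
				acct++;
				continue;
			}
			hpage = compound_head(pages[i]);
			if (hpage == *last_hpage)
				continue;	/* same huge page as last time */
			*last_hpage = hpage;
			if (head_already_seen(pages, i, hpage))
				continue;	/* already charged earlier */
			/* charge the whole compound page exactly once */
			acct += page_size(hpage) >> PAGE_SHIFT;
		}
		return acct;
	}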
9049 struct page **last_hpage)
9053 struct page **pages = NULL;
9071 pages = kvmalloc_array(nr_pages, sizeof(struct page *), GFP_KERNEL);
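The kvmalloc_array() at 9071 sizes a temporary array of page pointers before the user buffer is pinned. A sketch of the surrounding pattern, assuming a long-term pin via pin_user_pages() under the mmap lock; the flags and error handling are typical for this use, not copied from the kernel:

	#include <linux/err.h>
	#include <linux/mm.h>
	#include <linux/sched/mm.h>
	#include <linux/slab.h>

	static struct page **pin_user_buffer(unsigned long ubuf, size_t len,
					     int *nr_pinned)
	{
		unsigned long end = (ubuf + len + PAGE_SIZE - 1) >> PAGE_SHIFT;
		unsigned long start = ubuf >> PAGE_SHIFT;
		int nr_pages = end - start;
		struct page **pages;
		long ret;

		pages = kvmalloc_array(nr_pages, sizeof(struct page *),
				       GFP_KERNEL);
		if (!pages)
			return ERR_PTR(-ENOMEM);

		mmap_read_lock(current->mm);
		ret = pin_user_pages(ubuf, nr_pages,
				     FOLL_WRITE | FOLL_LONGTERM, pages, NULL);
		mmap_read_unlock(current->mm);

		if (ret != nr_pages) {
			/* partial pin: release what we got and bail out */
			if (ret > 0)
				unpin_user_pages(pages, ret);
			kvfree(pages);
			return ERR_PTR(ret < 0 ? ret : -EFAULT);
		}
		*nr_pinned = nr_pages;
		return pages;
	}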
9183 struct page *last_hpage = NULL;
9238 struct page *last_hpage = NULL;
9943 struct page *page;
9958 page = virt_to_head_page(ptr);
9959 if (sz > page_size(page))
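The check at 9958-9959 guards io_uring's mmap path: the requested mapping length must fit inside the (possibly compound) page backing the ring allocation, since page_size() returns the full size of a compound page. A sketch of the validation step, with the lookup of ptr from the mmap offset elided and the wrapper name invented for illustration:

	#include <linux/err.h>
	#include <linux/mm.h>

	static void *validate_mmap_request(void *ptr, size_t sz)
	{
		struct page *page;

		/*
		 * sz may span several PAGE_SIZE units of a compound page,
		 * but must not run past the end of the allocation.
		 */
		page = virt_to_head_page(ptr);
		if (sz > page_size(page))
			return ERR_PTR(-EINVAL);

		return ptr;
	}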