Lines matching refs: "from"

56  * FIXME: remove all knowledge of the buffer layer from the core VM
215 * Delete a page from the page cache and free it. Caller has to make
243 * filemap_remove_folio - Remove folio from page cache.
267 * page_cache_delete_batch - delete several folios from page cache
272 * @fbatch from the mapping. The function expects @fbatch to be sorted
297 * pages locked so they are protected from being removed.
713 * Grab the wb_err from the mapping. If it matches what we have in the file,
944 * The folio might have been evicted from cache only
948 * data from the working set, only to cache data that will
1059 * and remove it from the wait queue.
1067 * WQ_FLAG_WOKEN bit, wake it up, and remove it from the wait queue.
1154 * Take a breather from holding the lock,
1157 * from wait queue
1219 /* How many times do we accept lock stealing from under a waiter? */
1328 * waiter from the wait-queues, but the folio waiters bit will remain
1516 * Context: May be called from interrupt or process context. May not be
1517 * called from NMI context.
1787 * 1. Load the folio from i_pages
1793 * B. Remove the page from i_pages
1811 * of a previously evicted folio, or a swap entry from shmem/tmpfs,
1828 * A shadow entry of a recently evicted page, or a swap entry from
1984 * entry from shmem/tmpfs or a DAX entry. Return it
2019 * Any shadow entries of evicted folios, or swap entries from
2058 * find_lock_entries() will return a batch of entries from @mapping.
2129 * page cache. If folios are added to or removed from the page cache
2605 * filemap_read - Read data from the page cache.
2610 * Copies data from the page cache. If the data is not currently present,
2769 * the new data. We invalidate clean cached page from the region we're
2771 * without clobbering -EIOCBQUEUED from ->direct_IO().
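The two entries above (source lines 2769 and 2771) describe an ordering rule in the direct-write path: clean cached pages over the target range are invalidated *before* the I/O is issued, so an invalidation failure can be reported on its own and never has to overwrite an `-EIOCBQUEUED` returned by `->direct_IO()` for asynchronous submission. A minimal userspace sketch of that error-priority ordering; the names `direct_write`, `AIO_QUEUED`, and the helpers are hypothetical stand-ins, not kernel APIs:

```c
#include <assert.h>
#include <errno.h>

/* Stand-in value: the real -EIOCBQUEUED is a kernel-internal errno. */
#define AIO_QUEUED (-529)

/*
 * Hypothetical sketch: invalidate the cached pages covering the range
 * *before* issuing the direct I/O. A failed invalidation is returned
 * directly, while whatever ->direct_IO() returns (including an async
 * AIO_QUEUED) passes through untouched.
 */
static long direct_write(long (*direct_io)(void), int invalidate_failed)
{
	if (invalidate_failed)
		return -EBUSY;		/* safe: no I/O status exists yet */

	return direct_io();	/* may be AIO_QUEUED; never clobbered */
}

/* Hypothetical completion modes for the sketch's tests. */
static long io_sync_done(void) { return 100; }
static long io_queued(void)    { return AIO_QUEUED; }
```

Invalidating first means there is never a moment where an invalidation error and an in-flight async status compete for the single return value.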
2845 * Splice subpages from a folio into a pipe.
2879 * filemap_splice_read - Splice data from a file's pagecache into a pipe
2880 * @in: The file to read from
2881 * @ppos: Pointer to the file position to read from
2886 * This function gets folios from a file's pagecache and splices them into the
3260 * We never return with VM_FAULT_RETRY and a bit from VM_FAULT_ERROR set.
3483 * start_page is obtained from start via folio_page(folio, start)
3737 /* Folio was truncated from mapping */
3766 * @mapping: The address_space to read from.
3797 * The most likely error from this function is EIO, but ENOMEM is
3886 generic_file_direct_write(struct kiocb *iocb, struct iov_iter *from)
3889 size_t write_len = iov_iter_count(from);
3903 written = mapping->a_ops->direct_IO(iocb, from);
3936 iov_iter_revert(from, write_len - iov_iter_count(from));
3954 size_t copied; /* Bytes copied from user */
3963 * Bring in the user page that we will copy from _first_.
3964 * Otherwise there's a nasty deadlock on copying from the
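Source lines 3963-3964 refer to a classic page-cache deadlock: if the user buffer being copied from is an mmap of the very page being written, faulting it in while holding that page's lock would deadlock. The copy loop therefore pre-faults the source before taking the lock and then uses a non-faulting (atomic) copy, retrying on failure. A userspace-flavoured sketch of that ordering, with hypothetical helpers (`prefault_source`, `copy_atomic`) standing in for the kernel's fault-in and atomic-copy primitives:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

static bool resident;	/* is the (simulated) source page faulted in? */

/* Hypothetical: touch the user buffer before locking the destination. */
static void prefault_source(void)
{
	resident = true;
}

/* Hypothetical non-faulting copy: fails instead of faulting under the lock. */
static size_t copy_atomic(char *dst, const char *src, size_t n)
{
	if (!resident)
		return 0;	/* would have faulted: copy nothing */
	memcpy(dst, src, n);
	return n;
}

/* The loop shape: prefault, lock, attempt the atomic copy, retry on 0. */
static size_t copy_loop(char *dst, const char *src, size_t n)
{
	size_t copied;

	for (;;) {
		prefault_source();	/* done *before* the folio lock */
		/* lock destination folio here ... */
		copied = copy_atomic(dst, src, n);
		/* ... unlock destination folio */
		if (copied)
			return copied;
	}
}
```

The key invariant is that nothing inside the locked region can sleep on a page fault; a miss simply yields a zero-byte copy and another trip around the loop after re-faulting outside the lock.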
4025 * @from: iov_iter with data to write
4028 * file. It does all basic checks, removes SUID from the file, updates
4043 ssize_t __generic_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
4059 ret = generic_file_direct_write(iocb, from);
4067 if (ret < 0 || !iov_iter_count(from) || IS_DAX(inode))
4069 return direct_write_fallback(iocb, from, ret,
4070 generic_perform_write(iocb, from));
4073 return generic_perform_write(iocb, from);
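The entries from source lines 4043-4073 trace the write path's fallback shape: try the direct path first, and unless the iterator is exhausted, the inode is DAX, or a hard error occurred, finish whatever the direct I/O could not complete through the buffered path. A plain userspace C sketch of that shape; `try_direct`, `buffered`, and `write_with_fallback` are hypothetical stand-ins for `->direct_IO()`, `generic_perform_write()`, and `__generic_file_write_iter()`:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for ->direct_IO(): writes at most 'cap' bytes. */
static size_t try_direct(const char *buf, size_t len, size_t cap)
{
	(void)buf;
	return len < cap ? len : cap;
}

/* Hypothetical stand-in for generic_perform_write(): always completes. */
static size_t buffered(const char *buf, size_t len)
{
	(void)buf;
	return len;
}

/*
 * Mirrors the shape of __generic_file_write_iter(): attempt the direct
 * path, and if it came up short, push the remainder through the page
 * cache (the kernel additionally skips the fallback for DAX inodes and
 * on hard errors).
 */
static size_t write_with_fallback(const char *buf, size_t len, size_t direct_cap)
{
	size_t done = try_direct(buf, len, direct_cap);

	if (done < len)		/* short direct write: fall back */
		done += buffered(buf + done, len - done);
	return done;
}
```

In the real code the remainder is tracked by the `iov_iter` itself (`iov_iter_count(from)`), which is why the listing shows the fallback being skipped when the iterator is empty.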
4080 * @from: iov_iter with data to write
4090 ssize_t generic_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
4097 ret = generic_write_checks(iocb, from);
4099 ret = __generic_file_write_iter(iocb, from);
4173 * Instead, derive all information of interest from
4247 * we will query in the range from `off` to the end of the file.