Lines Matching defs:extents
99 * Clean up all submitted ordered extents in the specified range to handle errors
241 * We align size to sectorsize for inline extents just for simplicity
373 struct list_head extents;
402 list_add_tail(&async_extent->list, &cow->extents);
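The matches at 373 and 402 are the producer side of the async-extent queue: each chunk carries a struct list_head named extents, and the compression path appends every compressed range to it with list_add_tail() (the head itself is set up once with INIT_LIST_HEAD(), as the match at 1300 shows). Below is a minimal sketch of that pattern using the generic <linux/list.h> helpers; the struct layouts and the queue_async_extent() helper are simplified stand-ins for illustration, not the actual inode.c definitions.

    #include <linux/errno.h>
    #include <linux/list.h>
    #include <linux/slab.h>

    struct async_extent {
            struct list_head list;     /* linked into the owning chunk's extents list */
            /* ... description of one compressed range ... */
    };

    struct async_chunk {
            struct list_head extents;  /* head of the queued async extents */
            /* ... */
    };

    /* Hypothetical helper: queue one compressed range on the chunk. */
    static int queue_async_extent(struct async_chunk *cow)
    {
            struct async_extent *async_extent;

            async_extent = kmalloc(sizeof(*async_extent), GFP_NOFS);
            if (!async_extent)
                    return -ENOMEM;
            /* tail insertion keeps the queued extents in file-offset order */
            list_add_tail(&async_extent->list, &cow->extents);
            return 0;
    }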
458 * we create compressed extents in two phases. The first
763 * queued. We walk all the async extents created by compress_file_range
779 while (!list_empty(&async_chunk->extents)) {
780 async_extent = list_entry(async_chunk->extents.next,
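Lines 779-780 are the matching consumer: the worker drains the chunk's list by repeatedly detaching the head entry until the list is empty. A sketch of that drain loop, reusing the simplified types from the sketch above (the drain_async_extents() name is illustrative, not the kernel's):

    static void drain_async_extents(struct async_chunk *async_chunk)
    {
            struct async_extent *async_extent;

            while (!list_empty(&async_chunk->extents)) {
                    /* the oldest queued range sits at the head of the list */
                    async_extent = list_entry(async_chunk->extents.next,
                                              struct async_extent, list);
                    list_del(&async_extent->list);

                    /* ... submit or write back the compressed range ... */

                    kfree(async_extent);
            }
    }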
967 * allocate extents on disk for the range, and create ordered data structs
968 * in RAM to track those extents.
1038 * Relocation relies on the relocated extents to have exactly the same
1039 * size as the original extents. Normally writeback for relocation data
1040 * extents follows a NOCOW path because relocation preallocates the
1041 * extents. However, due to an operation such as scrub turning a block
1044 * not be split into smaller extents, otherwise relocation breaks and
1300 INIT_LIST_HEAD(&async_chunk[i].extents);
1412 * reason. Space caches and relocated data extents always get a prealloc
1442 * of the extents that exist in the file, and COWs the file as required.
1531 * more extents for this inode
1582 /* Skip compressed/encrypted/encoded extents */
1655 /* Skip extents outside of our requested range */
1875 * Handle merged delayed allocation extents so we can keep track of new extents
1876 * that are just merged onto old extents, such as when we are doing sequential
1903 * We have to add up either side to figure out how many extents were
1905 * extents we accounted for is <= the amount we need for the new range
1911 * need 2 outstanding extents, on one side we have 1 and the other side
1916 * Each range on their own accounts for 2 extents, but merged together
1917 * they are only 3 extents worth of accounting, so we need to drop in
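The accounting described around 1903-1917 boils down to a ceiling division of a delalloc range's length by the maximum extent size (128M, per the 128MB figure quoted elsewhere in this file). A hedged sketch of that arithmetic; the macro and helper below are illustrative, not the kernel's own definitions:

    /* Illustrative only: one outstanding extent per max-extent-sized chunk. */
    #define MAX_EXTENT_SIZE (128ULL * 1024 * 1024)

    static unsigned long long extents_needed(unsigned long long len)
    {
            return (len + MAX_EXTENT_SIZE - 1) / MAX_EXTENT_SIZE;
    }

    /*
     * Two (MAX_EXTENT_SIZE + 4K) ranges each need 2 outstanding extents,
     * 4 in total; merged into one (2 * MAX_EXTENT_SIZE + 8K) range they
     * need only 3, so the merge drops 1 outstanding extent.
     */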
2304 * There can't be any extents following eof in this case so just
2466 * to fix it up. The async helper will wait for ordered extents, set
3261 * it goes inode, inode backrefs, xattrs, extents,
3362 * idea about which extents were modified before we were evicted from
4286 * with a 16K leaf size and 128MB extents, you can actually queue
4422 * Inline extents are special; we just treat
4750 * This function puts in dummy file extents for the area we're creating a hole
4752 * these file extents so that btrfs_get_extent will return an EXTENT_MAP_HOLE for
4929 * adjusted disk_i_size down as we removed extents, so
6581 * there may be more extents which overlap the given range after the returned
7004 * This function will flush ordered extents in the range to ensure proper
7012 * NOTE: This only checks the file extents, caller is responsible to wait for
7013 * any ordered extents.
7173 * extents in this range.
7464 * Ok, for INLINE and COMPRESSED extents we need to fall back on buffered
7486 * previous non-compressed extents and then when we fall back to
7523 * We trim the extents (and move the addr) even though iomap code does
7736 * Our bio might span multiple ordered extents. In this case
7829 * or ordered extents whether or not we submit any bios.
8259 * to account for any ordered extents now
8407 * extents. Drop our locks and wait for them to finish
8556 * write the extents that changed, which is a problem if we need to
8558 * all of the extents in the inode to the sync log so we're completely
10203 * run concurrently while we are mapping the swap extents, and
10205 * file is active and moving the extents. Note that this also prevents
10229 * Snapshots can create extents which require COW even if NODATACOW is
10231 * before walking the extents because we don't want a concurrent
10232 * snapshot to run after we've already checked the extents.
10457 * use bmap to make a mapping of extents in the file. They assume
10458 * these extents won't change over the life of the file and they