Lines Matching defs:blocks

53 * How do we handle breaking sharing of data blocks?
61 * same data blocks.
104 * blocks will typically be shared by many different devices, so we're
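The comments above describe breaking sharing as copy-on-write on the data: the writing device gets a freshly allocated data block and a copy of the old contents, while every other device keeps its mapping to the old, still-shared block. A minimal sketch of that path, with hypothetical helper names (alloc_data_block, copy_and_remap, retry_bio_later) that only loosely follow the real driver:

/* Hypothetical helpers, loosely modelled on the driver's internals. */
static int alloc_data_block(struct thin_c *tc, dm_block_t *result);
static void copy_and_remap(struct thin_c *tc, dm_block_t virt_block,
			   dm_block_t old_data, dm_block_t new_data,
			   struct dm_bio_prison_cell *cell, struct bio *bio);
static void retry_bio_later(struct bio *bio);

static void break_sharing_sketch(struct thin_c *tc, struct bio *bio,
				 dm_block_t virt_block, dm_block_t shared_data,
				 struct dm_bio_prison_cell *cell)
{
	dm_block_t data_block;

	/* Give the writing device a private data block of its own. */
	if (alloc_data_block(tc, &data_block)) {
		retry_bio_later(bio);	/* e.g. the pool is out of data space */
		return;
	}

	/*
	 * Copy the shared block into the new one and remap only this
	 * device's virtual block; the other devices keep referencing the
	 * old block, so their view of the data is unchanged.
	 */
	copy_and_remap(tc, virt_block, shared_data, data_block, cell, bio);
}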
691 * Returns the _complete_ blocks that this bio covers.
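The "complete" blocks of a bio are found by rounding its start sector up and its end sector down to pool block boundaries; anything left over at either end is not a whole block and is excluded. A freestanding sketch of that arithmetic (the helper itself is hypothetical; bi_size is in bytes, hence the shift by 9 to get 512-byte sectors):

static void bio_complete_block_range(sector_t bi_sector, unsigned int bi_size,
				     sector_t sectors_per_block,
				     sector_t *begin, sector_t *end)
{
	sector_t b = bi_sector;
	sector_t e = bi_sector + (bi_size >> 9);

	/* Round the start up: a partial leading block is not complete. */
	b = (b + sectors_per_block - 1) / sectors_per_block;

	/* Round the end down: a partial trailing block is dropped too. */
	e = e / sectors_per_block;

	/* A bio that fits inside one block covers no complete block at all. */
	if (e < b)
		e = b;

	*begin = b;
	*end = e;
}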
1081 * We've already unmapped this range of blocks, but before we
1082 * passdown we have to check that these blocks are now unused.
1156 * Only this thread allocates blocks, so we can be sure that the
1157 * newly unmapped blocks will not be allocated before the end of
1170 * Increment the unmapped blocks. This prevents a race between the
1171 * passdown io and reallocation of freed blocks.
1212 * unmapped blocks.
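Taken together, these comments describe the ordering that makes discard passdown safe: the range is first unmapped from the mapping btree, the freed data blocks are pinned by bumping their reference counts so the allocator cannot reuse them, the discard is passed down to the data device, and only on completion are the counts dropped so the blocks become free again. A condensed sketch (the two completion stages are collapsed into one function and issue_discard_to_data_dev is hypothetical):

/* Hypothetical helper that sends the discard to the underlying data device. */
static void issue_discard_to_data_dev(struct thin_c *tc,
				      dm_block_t begin, dm_block_t end);

static void discard_passdown_sketch(struct thin_c *tc,
				    dm_block_t begin, dm_block_t end)
{
	struct pool *pool = tc->pool;

	/* Stage 1: the range has already been removed from the btree. */

	/*
	 * Pin the newly unmapped blocks so a concurrent allocation cannot
	 * hand them out while the discard is still in flight.
	 */
	(void) dm_pool_inc_data_range(pool->pmd, begin, end);

	issue_discard_to_data_dev(tc, begin, end);

	/*
	 * Stage 2, really run from the discard's completion path: drop the
	 * extra references; only now are the blocks genuinely free again.
	 */
	(void) dm_pool_dec_data_range(pool->pmd, begin, end);
}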
1460 ooms_reason = "Could not get free metadata blocks";
1462 ooms_reason = "No free metadata blocks";
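Both strings belong to the check for remaining metadata space: the pool asks the metadata layer how many metadata blocks are still free and, if the answer is an error or zero, degrades itself rather than risk failing an allocation mid-update. A sketch under that assumption (the pool-mode helper and constant mirror the driver's naming but are quoted from memory):

static void check_for_metadata_space_sketch(struct pool *pool)
{
	int r;
	dm_block_t nr_free;
	const char *ooms_reason = NULL;

	r = dm_pool_get_free_metadata_block_count(pool->pmd, &nr_free);
	if (r)
		ooms_reason = "Could not get free metadata blocks";
	else if (!nr_free)
		ooms_reason = "No free metadata blocks";

	if (ooms_reason) {
		DMERR("%s", ooms_reason);
		/* Stop servicing writes that would need more metadata. */
		set_pool_mode(pool, PM_OUT_OF_METADATA_SPACE);
	}
}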
1660 * We don't need to lock the data blocks, since there's no
1661 * passdown. We only lock data blocks for allocation and breaking sharing.
2094 * metadata blocks?
3197 * This ensures that the data blocks of any newly inserted mappings are
3202 * external snapshots and in the case of newly provisioned blocks, when block
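These fragments concern ordering data writes against metadata commits. Assuming the mechanism is a flush of the data device issued before each commit, so the blocks a new mapping points at are durable before the mapping itself is, a one-line sketch of such a hook would be (older kernels give blkdev_issue_flush() extra arguments):

static int flush_data_before_commit(struct block_device *data_bdev)
{
	/* Make newly written data blocks durable before committing metadata. */
	return blkdev_issue_flush(data_bdev);
}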
3271 * <low water mark (blocks)>
3275 * skip_block_zeroing: skips the zeroing of newly-provisioned blocks.
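As a usage reference, a pool table line carrying these arguments could look like the following (device paths and sizes are placeholders; 128 sectors is a 64 KiB data block size, 32768 is the low water mark in blocks, and the trailing "1 skip_block_zeroing" supplies one optional feature argument):

dmsetup create pool --table \
  "0 20971520 thin-pool /dev/mapper/meta /dev/mapper/data 128 32768 1 skip_block_zeroing"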
3472 DMERR("%s: pool target (%llu blocks) too small: expected %llu",
3485 DMINFO("%s: growing the data device from %llu to %llu blocks",
3519 DMERR("%s: metadata device (%llu blocks) too small: expected %llu",
3532 DMINFO("%s: growing the metadata device from %llu to %llu blocks",
3552 * Retrieves the number of blocks of the data device from
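These messages come from the size checks run against the superblock: the block count recorded in the metadata is compared with the current size of the target, shrinking is rejected, and growth is propagated into the metadata (after which a commit is needed). A condensed sketch of the data-device half, with most error handling omitted:

static int maybe_resize_data_dev_sketch(struct dm_target *ti, bool *need_commit)
{
	struct pool_c *pt = ti->private;
	struct pool *pool = pt->pool;
	sector_t data_size = ti->len;
	dm_block_t sb_data_size;

	*need_commit = false;
	(void) sector_div(data_size, pool->sectors_per_block);

	if (dm_pool_get_data_dev_size(pool->pmd, &sb_data_size))
		return -EINVAL;

	if (data_size < sb_data_size) {
		/* The data device may grow but must never shrink. */
		DMERR("%s: pool target (%llu blocks) too small: expected %llu",
		      dm_device_name(pool->pool_md),
		      (unsigned long long)data_size,
		      (unsigned long long)sb_data_size);
		return -EINVAL;
	}

	if (data_size > sb_data_size) {
		DMINFO("%s: growing the data device from %llu to %llu blocks",
		       dm_device_name(pool->pool_md),
		       (unsigned long long)sb_data_size,
		       (unsigned long long)data_size);
		if (dm_pool_resize_data_dev(pool->pmd, data_size))
			return -EINVAL;
		*need_commit = true;
	}

	return 0;
}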
4454 sector_t blocks;
4459 * We can't call dm_pool_get_data_dev_size() since that blocks. So
4465 blocks = pool->ti->len;
4466 (void) sector_div(blocks, pool->sectors_per_block);
4467 if (blocks)
4468 return fn(ti, tc->pool_dev, 0, pool->sectors_per_block * blocks, data);
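Worked through with example numbers: assuming sectors_per_block = 128 (64 KiB blocks) and a pool target length (pool->ti->len) of 1,000,000 sectors, sector_div() leaves blocks = 7812, and fn() is called for 7812 * 128 = 999,936 sectors, i.e. only the whole data blocks; the trailing partial block is never reported.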