Lines Matching refs:inodes

428  * The following routine will lock n inodes in exclusive mode.  We assume the
429 * caller calls us with the inodes in i_ino order.
445 int inodes,
452 * Currently supports between 2 and 5 inodes with exclusive locking. We
454 * inodes depend on the type of locking and the limits placed by
458 ASSERT(ips && inodes >= 2 && inodes <= 5);
464 inodes <= XFS_MMAPLOCK_MAX_SUBCLASS + 1);
466 inodes <= XFS_ILOCK_MAX_SUBCLASS + 1);
476 for (; i < inodes; i++) {
483 * If try_lock is not set yet, make sure all locked inodes are
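
The hits above (lines 428-483) are from xfs_lock_inodes(), which takes an
array the caller has already sorted by i_ino and acquires each lock in turn.
As a rough illustration of why that ordering matters, here is a minimal
userspace sketch of ordered multi-inode locking; struct my_inode,
lock_inodes_in_order() and the pthread mutexes are stand-ins of my own, not
kernel types.

/* Simplified model: lock several "inodes" in ascending i_ino order so that
 * concurrent callers can never deadlock against each other. */
#include <assert.h>
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

struct my_inode {
	uint64_t	i_ino;		/* inode number: the global lock-order key */
	pthread_mutex_t	i_lock;		/* stands in for the XFS inode lock */
};

/* The array must already be sorted by i_ino, as xfs_lock_inodes() assumes. */
static void lock_inodes_in_order(struct my_inode **ips, int inodes)
{
	int i;

	assert(ips && inodes >= 2 && inodes <= 5);	/* mirrors the kernel ASSERT */
	for (i = 0; i < inodes; i++) {
		assert(i == 0 || ips[i - 1]->i_ino <= ips[i]->i_ino);
		pthread_mutex_lock(&ips[i]->i_lock);
	}
}

static void unlock_inodes(struct my_inode **ips, int inodes)
{
	while (inodes-- > 0)		/* any order is safe once all are held */
		pthread_mutex_unlock(&ips[inodes]->i_lock);
}

int main(void)
{
	struct my_inode a = { 7, PTHREAD_MUTEX_INITIALIZER };
	struct my_inode b = { 42, PTHREAD_MUTEX_INITIALIZER };
	struct my_inode *ips[] = { &a, &b };	/* already in i_ino order */

	lock_inodes_in_order(ips, 2);
	printf("locked inodes %llu and %llu\n",
	       (unsigned long long)a.i_ino, (unsigned long long)b.i_ino);
	unlock_inodes(ips, 2);
	return 0;
}
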
789 * If we are allocating quota inodes, we do not have a parent inode
831 * and on-disk inodes. If we don't catch reallocating the parent inode
1225 * Attach the dquot(s) to the inodes and modify them incore.
1315 * Attach the dquot(s) to the inodes and modify them incore.
1744 * completion, and can result in leaving dirty stale inodes hanging
1896 * are collectively known as unlinked inodes, though the filesystem itself
1897 * maintains links to these inodes so that on-disk metadata are consistent.
1899 * XFS implements a per-AG on-disk hash table of unlinked inodes. The AGI
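
Lines 1896-1899 describe the AGI's table of unlinked-list heads. Below is a
tiny sketch of how an unlinked inode lands in a bucket, assuming the 64-entry
table (XFS_AGI_UNLINKED_BUCKETS) and a simple modulo hash of the AG-relative
inode number; the helper name is illustrative, not a kernel symbol.

#include <stdint.h>
#include <stdio.h>

#define AGI_UNLINKED_BUCKETS	64	/* mirrors XFS_AGI_UNLINKED_BUCKETS */

static unsigned int agi_unlinked_bucket(uint32_t agino)
{
	return agino % AGI_UNLINKED_BUCKETS;
}

int main(void)
{
	uint32_t agino = 131;	/* AG-relative number of an unlinked inode */

	printf("agino %u hashes to unlinked bucket %u\n",
	       agino, agi_unlinked_bucket(agino));
	return 0;
}
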
2498 * mark it stale. We should only find clean inodes in this lookup that aren't
2534 * other inodes that we did not find in the list attached to the buffer
2595 * inodes that are in memory - they all must be marked stale and attached to
2618 * The allocation bitmap tells us which inodes of the chunk were
2633 * here to ensure dirty inodes attached to the buffer remain in
2636 * If we scan the in-memory inodes first, then buffer IO can
2638 * to mark all the active inodes on the buffer stale.
2650 * attach stale cached inodes on it. That means it will never be
2658 * Now we need to set all the cached clean inodes as XFS_ISTALE,
2659 * too. This requires lookups, and will skip inodes that we've
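
Lines 2498-2659 cover freeing an inode chunk: the chunk's allocation bitmap
says which inodes existed, and every cached copy of those inodes has to be
marked XFS_ISTALE. A userspace sketch of walking such a free mask, assuming
the 64-inodes-per-chunk layout and "set bit means free" semantics (as in the
inobt record); mark_stale() is a hypothetical stand-in for the in-memory
lookup that stales cached inodes.

#include <stdint.h>
#include <stdio.h>

#define INODES_PER_CHUNK	64	/* mirrors XFS_INODES_PER_CHUNK */

static void mark_stale(uint64_t ino)
{
	printf("inode %llu was allocated; would mark cached copy stale\n",
	       (unsigned long long)ino);
}

static void visit_allocated(uint64_t first_ino, uint64_t free_mask)
{
	int i;

	for (i = 0; i < INODES_PER_CHUNK; i++) {
		if (free_mask & (1ULL << i))
			continue;	/* bit set: inode was free, skip it */
		mark_stale(first_ino + i);
	}
}

int main(void)
{
	/* Chunk starting at inode 128; only inodes 128 and 130 are in use. */
	visit_allocated(128, ~((1ULL << 0) | (1ULL << 2)));
	return 0;
}
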
2677 * inodes in the AGI. We need to remove the inode from that list atomically with
2797 * new inodes, locking the AGF after the AGI. Similarly, freeing the inode
2934 * Enter all inodes for a rename transaction into a sorted array.
2944 struct xfs_inode **i_tab,/* out: sorted array of inodes */
2945 int *num_inodes) /* in/out: inodes in array */
2953 * i_tab contains a list of pointers to inodes. We initialize
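
Lines 2934-2953 are from the helper that builds the sorted inode table for a
rename. A minimal sketch of the same idea: collect the non-NULL participants
and insertion-sort them by i_ino so the later lock call sees them in order.
The struct and function names here are mine, not the kernel's.

#include <stdint.h>
#include <stdio.h>

struct my_inode {
	uint64_t	i_ino;
};

/* i_tab: out, sorted array of inodes; num_inodes: out, inodes in array */
static void sort_for_rename(struct my_inode **participants, int count,
			    struct my_inode **i_tab, int *num_inodes)
{
	int i, j, n = 0;

	for (i = 0; i < count; i++) {
		if (!participants[i])
			continue;	/* absent participant, e.g. no target */
		i_tab[n++] = participants[i];
	}
	*num_inodes = n;

	/* Insertion sort: the array is tiny and may contain duplicates. */
	for (i = 1; i < n; i++) {
		struct my_inode *tmp = i_tab[i];

		for (j = i; j > 0 && i_tab[j - 1]->i_ino > tmp->i_ino; j--)
			i_tab[j] = i_tab[j - 1];
		i_tab[j] = tmp;
	}
}

int main(void)
{
	struct my_inode dp1 = { 99 }, dp2 = { 12 }, ip1 = { 7 };
	struct my_inode *participants[4] = { &dp1, &dp2, &ip1, NULL };
	struct my_inode *i_tab[4];
	int num_inodes, i;

	sort_for_rename(participants, 4, i_tab, &num_inodes);
	for (i = 0; i < num_inodes; i++)
		printf("i_tab[%d] = inode %llu\n", i,
		       (unsigned long long)i_tab[i]->i_ino);
	return 0;
}
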
3156 struct xfs_inode *inodes[__XFS_SORT_INODES];
3184 inodes, &num_inodes);
3197 * Attach the dquots to the inodes
3199 error = xfs_qm_vop_rename_dqattach(inodes);
3204 * Lock all the participating inodes. Depending upon whether
3207 * directory, we can lock from 2 to 4 inodes.
3209 xfs_lock_inodes(inodes, num_inodes, XFS_ILOCK_EXCL);
3212 * Join all the inodes to the transaction. From this point on,
3279 for (i = 0; i < num_inodes && inodes[i] != NULL; i++) {
3280 if (inodes[i] == wip ||
3281 (inodes[i] == target_ip &&
3286 agno = XFS_INO_TO_AGNO(mp, inodes[i]->i_ino);
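
Lines 3279-3286 walk that sorted table and, for any entry whose unlinked-list
state will change (the whiteout inode, or a target about to lose its last
link), touch the owning AG before directory block allocation can grab an AGF.
A rough sketch of visiting AGs in ascending order from a sorted inode array;
the bit split used to derive the AG number is arbitrary here, the real
conversion is XFS_INO_TO_AGNO(), and the skip-repeats step is a
simplification.

#include <stdint.h>
#include <stdio.h>

#define AGINO_BITS	32	/* illustrative: high bits of an ino = AG number */

static uint32_t ino_to_agno(uint64_t ino)
{
	return (uint32_t)(ino >> AGINO_BITS);
}

int main(void)
{
	/* Sorted by inode number; 0 marks the end of the table. */
	uint64_t inodes[] = {
		((uint64_t)1 << AGINO_BITS) | 7,	/* AG 1 */
		((uint64_t)1 << AGINO_BITS) | 9,	/* AG 1 */
		((uint64_t)3 << AGINO_BITS) | 2,	/* AG 3 */
		0,
	};
	uint32_t last_agno = (uint32_t)-1;
	int i;

	for (i = 0; inodes[i] != 0; i++) {
		uint32_t agno = ino_to_agno(inodes[i]);

		if (agno == last_agno)
			continue;	/* sorted input: AG already handled */
		printf("would read the AGI for AG %u before any AGF\n", agno);
		last_agno = agno;
	}
	return 0;
}
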
3517 * Inode item log recovery for v2 inodes is dependent on the
3594 * locked. The function will walk across all the inodes on the cluster buffer it
3598 * buffer and release it. If no inodes are flushed, -EAGAIN will be returned and
3672 /* don't block waiting on a log force to unpin dirty inodes */
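
Lines 3594-3672 describe the cluster flush: walk every inode attached to the
buffer, write back whatever can be written without blocking, and return
-EAGAIN if nothing was flushed. A simplified userspace model of that
"flush what you can" pattern; the types and helpers are illustrative only.

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

struct model_inode {
	bool	dirty;
	bool	pinned;		/* needs a log force first; don't wait for it here */
};

static int flush_cluster(struct model_inode *ips, int count)
{
	int i, flushed = 0;

	for (i = 0; i < count; i++) {
		if (!ips[i].dirty || ips[i].pinned)
			continue;	/* nothing to do, or it would block */
		ips[i].dirty = false;	/* "flush" the inode to the buffer */
		flushed++;
	}
	return flushed ? 0 : -EAGAIN;
}

int main(void)
{
	struct model_inode ips[3] = {
		{ .dirty = true,  .pinned = false },
		{ .dirty = true,  .pinned = true  },
		{ .dirty = false, .pinned = false },
	};

	printf("flush_cluster() returned %d\n", flush_cluster(ips, 3));
	return 0;
}
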
3751 /* Wait to break both inodes' layouts before we start locking. */
3789 * Lock two inodes so that userspace cannot initiate I/O via file syscalls or
3810 /* Unlock both inodes to allow IO and mmap activity. */
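
The last hits (lines 3751-3810) are about quiescing two inodes so that neither
file syscalls nor page faults can start new I/O while their contents are being
operated on. A minimal sketch of the idea: take each inode's I/O lock and then
its mmap lock in a fixed i_ino order, and tolerate both arguments being the
same inode. The kernel uses rw_semaphores and the mapping's invalidate lock
rather than the pthread mutexes and names shown here.

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

struct two_inode {
	uint64_t	i_ino;
	pthread_mutex_t	io_lock;	/* blocks syscall-driven I/O in this model */
	pthread_mutex_t	mmap_lock;	/* blocks page-fault-driven I/O */
};

static void ilock2_io_mmap(struct two_inode *ip1, struct two_inode *ip2)
{
	if (ip1->i_ino > ip2->i_ino) {	/* keep a stable lock order */
		struct two_inode *tmp = ip1;
		ip1 = ip2;
		ip2 = tmp;
	}
	pthread_mutex_lock(&ip1->io_lock);
	if (ip2 != ip1)
		pthread_mutex_lock(&ip2->io_lock);
	pthread_mutex_lock(&ip1->mmap_lock);
	if (ip2 != ip1)
		pthread_mutex_lock(&ip2->mmap_lock);
}

static void iunlock2_io_mmap(struct two_inode *ip1, struct two_inode *ip2)
{
	pthread_mutex_unlock(&ip1->mmap_lock);
	if (ip2 != ip1)
		pthread_mutex_unlock(&ip2->mmap_lock);
	pthread_mutex_unlock(&ip1->io_lock);
	if (ip2 != ip1)
		pthread_mutex_unlock(&ip2->io_lock);
}

int main(void)
{
	struct two_inode a = { 3, PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER };
	struct two_inode b = { 9, PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER };

	ilock2_io_mmap(&a, &b);
	printf("both inodes quiesced: no new IO or mmap activity can start\n");
	iunlock2_io_mmap(&a, &b);
	return 0;
}
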