Lines Matching refs:inodes
436 * The following routine will lock n inodes in exclusive mode. We assume the
437 * caller calls us with the inodes in i_ino order.
453 int inodes,
463 * Currently supports between 2 and 5 inodes with exclusive locking. We
465 * inodes depend on the type of locking and the limits placed by
469 ASSERT(ips && inodes >= 2 && inodes <= 5);
475 inodes <= XFS_MMAPLOCK_MAX_SUBCLASS + 1);
477 inodes <= XFS_ILOCK_MAX_SUBCLASS + 1);
487 for (; i < inodes; i++) {
494 * If try_lock is not set yet, make sure all locked inodes are
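The fragments above (source lines 436–494) describe xfs_lock_inodes: the caller passes between 2 and 5 inodes already sorted by i_ino, and the routine takes the locks in that ascending order so two concurrent lockers can never deadlock against each other. A minimal userspace sketch of the idea, using pthread mutexes as a stand-in for the kernel's ILOCK (struct mock_inode, lock_inodes, and unlock_inodes are illustrative names, not the kernel API):

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* Illustrative stand-in for struct xfs_inode; not the kernel type. */
struct mock_inode {
	unsigned long long i_ino;
	pthread_mutex_t lock;
};

/*
 * Lock 'n' inodes in exclusive mode. As in the kernel routine, we
 * assume the caller hands us the inodes in ascending i_ino order;
 * taking locks in one global order is what rules out ABBA deadlocks.
 */
static void lock_inodes(struct mock_inode **ips, int n)
{
	assert(ips != NULL && n >= 2 && n <= 5);
	for (int i = 0; i < n; i++) {
		if (i > 0)
			assert(ips[i - 1]->i_ino <= ips[i]->i_ino);
		pthread_mutex_lock(&ips[i]->lock);
	}
}

static void unlock_inodes(struct mock_inode **ips, int n)
{
	/* Release in reverse order; any release order is deadlock-safe. */
	for (int i = n - 1; i >= 0; i--)
		pthread_mutex_unlock(&ips[i]->lock);
}
```

Compile with `-pthread`. The real function also handles try-lock retry and lockdep subclass limits (the XFS_MMAPLOCK_MAX_SUBCLASS / XFS_ILOCK_MAX_SUBCLASS assertions quoted above), which this sketch omits.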
802 * and on-disk inodes. If we don't catch reallocating the parent inode
1070 * Attach the dquot(s) to the inodes and modify them incore.
1162 * Attach the dquot(s) to the inodes and modify them incore.
1591 * completion, and can result in leaving dirty stale inodes hanging
1659 /* If the log isn't running, push inodes straight to reclaim. */
1663 /* Metadata inodes require explicit resource cleanup. */
1723 /* Metadata inodes require explicit resource cleanup. */
1759 * unlinked inodes.
1809 * are collectively known as unlinked inodes, though the filesystem itself
1810 * maintains links to these inodes so that on-disk metadata are consistent.
1812 * XFS implements a per-AG on-disk hash table of unlinked inodes. The AGI
1823 * misses on lookups. Instead, use the fact that inodes on the unlinked list
1825 * have an existence guarantee for inodes on the unlinked list.
1828 * to resolve aginos to xfs inodes. This means we only need 8 bytes per inode
1838 * only unlinked, referenced inodes can be on the unlinked inode list. If we
2169 * mark it stale. We should only find clean inodes in this lookup that aren't
2204 * other inodes that we did not find in the list attached to the buffer
2265 * inodes that are in memory - they all must be marked stale and attached to
2289 * The allocation bitmap tells us which inodes of the chunk were
2304 * here to ensure dirty inodes attached to the buffer remain in
2307 * If we scan the in-memory inodes first, then buffer IO can
2309 * to mark all the active inodes on the buffer stale.
2321 * attach stale cached inodes on it. That means it will never be
2329 * Now we need to set all the cached clean inodes as XFS_ISTALE,
2330 * too. This requires lookups, and will skip inodes that we've
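The cluster-free fragments (source lines 2265–2330) describe marking every cached inode backed by a freed chunk as XFS_ISTALE, so that no dirty inode is ever flushed over disk space that has been returned to the free pool. A simplified userspace sketch of that scan, under the assumption of a fixed cluster size and a flat cache array (struct mock_inode, the `stale` flag, and stale_cluster_inodes are illustrative stand-ins, not kernel code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define MOCK_INODES_PER_CLUSTER 32	/* assumed geometry */

struct mock_inode {
	unsigned long long i_ino;
	bool stale;	/* stand-in for the XFS_ISTALE inode flag */
};

/*
 * Walk the cache and mark every inode whose number falls inside the
 * freed chunk as stale. Once stale, a flush of that inode must be
 * aborted rather than written back, since its disk blocks are free.
 */
static int stale_cluster_inodes(struct mock_inode *cache[], int ncache,
				unsigned long long first_ino)
{
	int found = 0;

	for (int i = 0; i < ncache; i++) {
		struct mock_inode *ip = cache[i];

		if (ip == NULL || ip->i_ino < first_ino ||
		    ip->i_ino >= first_ino + MOCK_INODES_PER_CLUSTER)
			continue;
		ip->stale = true;
		found++;
	}
	return found;
}
```

The ordering subtlety quoted above (buffer first, then in-memory inodes) is not modeled here; the kernel must serialize against buffer IO completion so a flush cannot race with the staling pass.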
2348 * inodes in the AGI. We need to remove the inode from that list atomically with
2474 * new inodes, locking the AGF after the AGI. Similarly, freeing the inode
2563 * directory to eliminate back-references to inodes that may
2618 * Enter all inodes for a rename transaction into a sorted array.
2628 struct xfs_inode **i_tab,/* out: sorted array of inodes */
2629 int *num_inodes) /* in/out: inodes in array */
2637 * i_tab contains a list of pointers to inodes. We initialize
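The rename fragments (source lines 2618–2637) describe collecting all participating inodes into a sorted output array, so that the later xfs_lock_inodes call can take the locks in ascending i_ino order. A hedged sketch of that gather-and-sort step (struct mock_inode and sort_for_rename are illustrative names; the real routine also deduplicates repeated participants, which this sketch omits):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-in for struct xfs_inode. */
struct mock_inode {
	unsigned long long i_ino;
};

/*
 * Copy the non-NULL rename participants into i_tab and insertion-sort
 * them by i_ino. Insertion sort is a fine fit here because at most
 * five inodes can take part in a rename.
 */
static void sort_for_rename(struct mock_inode **ips, int count,
			    struct mock_inode **i_tab, int *num_inodes)
{
	int n = 0;

	for (int i = 0; i < count; i++)
		if (ips[i] != NULL)
			i_tab[n++] = ips[i];

	for (int i = 1; i < n; i++) {
		struct mock_inode *tmp = i_tab[i];
		int j = i - 1;

		while (j >= 0 && i_tab[j]->i_ino > tmp->i_ino) {
			i_tab[j + 1] = i_tab[j];
			j--;
		}
		i_tab[j + 1] = tmp;
	}
	*num_inodes = n;
}
```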
2854 struct xfs_inode *inodes[__XFS_SORT_INODES];
2884 inodes, &num_inodes);
2900 * Attach the dquots to the inodes
2902 error = xfs_qm_vop_rename_dqattach(inodes);
2907 * Lock all the participating inodes. Depending upon whether
2910 * directory, we can lock from 2 to 5 inodes.
2912 xfs_lock_inodes(inodes, num_inodes, XFS_ILOCK_EXCL);
2915 * Join all the inodes to the transaction. From this point on,
3007 for (i = 0; i < num_inodes && inodes[i] != NULL; i++) {
3008 if (inodes[i] == wip ||
3009 (inodes[i] == target_ip &&
3015 XFS_INO_TO_AGNO(mp, inodes[i]->i_ino));
3255 * Inode item log recovery for v2 inodes is dependent on the flushiter
3334 * locked. The function will walk across all the inodes on the cluster buffer it
3338 * buffer and release it. If no inodes are flushed, -EAGAIN will be returned and
3412 /* don't block waiting on a log force to unpin dirty inodes */
3501 /* Wait to break both inodes' layouts before we start locking. */
3583 * Lock two inodes so that userspace cannot initiate I/O via file syscalls or
3612 /* Unlock both inodes to allow IO and mmap activity. */