Lines Matching refs:list
127 * keep a list on each CPU, with each list protected by its own spinlock.
130 * Note that alterations to the list also require that the relevant flc_lock is
159 * In addition, it also protects the fl->fl_blocked_requests list, and the
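(A rough kernel-style sketch of the scheme those three comments describe; the struct layout, the field names fl_link/fl_link_cpu, and the helper below are assumptions for illustration, not necessarily what fs/locks.c actually defines.)

    /* One global list per CPU, each guarded by its own spinlock, so adding a
     * lock only contends with other additions on the same CPU. */
    struct file_lock_list_struct {
        spinlock_t        lock;
        struct hlist_head hlist;
    };
    static DEFINE_PER_CPU(struct file_lock_list_struct, file_lock_list);

    static void locks_insert_global_locks(struct file_lock *fl)
    {
        struct file_lock_list_struct *fll = get_cpu_ptr(&file_lock_list);

        spin_lock(&fll->lock);
        fl->fl_link_cpu = smp_processor_id();   /* remember whose list holds it */
        hlist_add_head(&fl->fl_link, &fll->hlist);
        spin_unlock(&fll->lock);
        put_cpu_ptr(&file_lock_list);
    }

Removal would then look the lock up via per_cpu_ptr(&file_lock_list, fl->fl_link_cpu), since a lock may well be released on a different CPU than the one it was inserted from.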
206 locks_dump_ctx_list(struct list_head *list, char *list_type)
210 list_for_each_entry(fl, list, fl_list) {
233 locks_check_ctx_file_list(struct file *filp, struct list_head *list,
239 list_for_each_entry(fl, list, fl_list)
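(The two helpers named above are debugging aids; a sketch of the iteration pattern they use, with the printed fields chosen here for illustration rather than taken from the file:)

    static void locks_dump_ctx_list(struct list_head *list, char *list_type)
    {
        struct file_lock *fl;

        /* Walk every file_lock hanging off this context list via its fl_list node. */
        list_for_each_entry(fl, list, fl_list)
            pr_warn("%s: owner=%p flags=0x%x type=0x%x pid=%d\n",
                    list_type, fl->fl_owner, fl->fl_flags, fl->fl_type, fl->fl_pid);
    }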
534 * old entry, then it used "priv" and inserted it into the fasync list.
624 * is done while holding the flc_lock, and new insertions into the list
656 /* Remove waiter from blocker's block list.
657 * When blocker ends up pointing to itself then the list is empty.
709 * was recently added to that list it must have been in a locked region
717 * no new locks can be inserted into its fl_blocked_requests list, and
718 * can avoid doing anything further if the list is empty.
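(Putting those fragments together, the removal side looks roughly like the sketch below; the helper and field names are assumptions and the real code may differ in detail.)

    /* Detach one waiter from its blocker; caller holds blocked_lock_lock. */
    static void __locks_delete_block(struct file_lock *waiter)
    {
        /* list_del_init() re-points the removed entry at itself; once the last
         * waiter is gone the blocker's fl_blocked_requests head likewise points
         * only to itself, which is exactly what list_empty() tests. */
        list_del_init(&waiter->fl_blocked_member);
        /* Release ordering, so a lockless reader that sees fl_blocker == NULL
         * also sees the list manipulation above. */
        smp_store_release(&waiter->fl_blocker, NULL);
    }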
740 /* Insert waiter into blocker's block list.
741 * We use a circular list so that processes can be easily woken up in
746 * fl_blocked_requests list itself is protected by the blocked_lock_lock,
749 * fl_blocked_requests list is empty.
751 * Rather than just adding to the list, we check for conflicts with any existing
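(The insertion side those comments describe could be sketched as below: scan the blocker's existing waiters and, if the new request conflicts with one of them, queue behind that waiter instead, so wakeups happen in blocking order. The conflict() callback and the names used are assumptions.)

    static void __locks_insert_block(struct file_lock *blocker,
                                     struct file_lock *waiter,
                                     bool conflict(struct file_lock *,
                                                   struct file_lock *))
    {
        struct file_lock *fl;

    new_blocker:
        /* If we conflict with an existing waiter, queue behind it instead. */
        list_for_each_entry(fl, &blocker->fl_blocked_requests, fl_blocked_member)
            if (conflict(fl, waiter)) {
                blocker = fl;
                goto new_blocker;
            }
        waiter->fl_blocker = blocker;
        /* Tail insertion keeps the circular list in FIFO wakeup order. */
        list_add_tail(&waiter->fl_blocked_member, &blocker->fl_blocked_requests);
    }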
800 * Avoid taking global lock if list is empty. This is safe since new
801 * blocked requests are only added to the list under the flc_lock, and
803 * fl_blocked_requests list does not require the flc_lock, so we must
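(A sketch of the resulting fast path; the exact barrier pairing and return convention are assumptions about how such a check is typically written, not a claim about the file's code.)

    int locks_delete_block(struct file_lock *waiter)
    {
        int status = -ENOENT;

        /* Lockless fast path: if nobody queued requests behind us and our own
         * fl_blocker is already clear, there is nothing to clean up.  The
         * acquire pairs with the release store that clears fl_blocker, so a
         * NULL blocker guarantees the list updates are visible too. */
        if (!smp_load_acquire(&waiter->fl_blocker) &&
            list_empty(&waiter->fl_blocked_requests))
            return status;

        spin_lock(&blocked_lock_lock);
        if (waiter->fl_blocker)
            status = 0;     /* we really were blocked */
        /* Requests queued behind us would be re-woken here (elided). */
        __locks_delete_block(waiter);
        spin_unlock(&blocked_lock_lock);
        return status;
    }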
1132 * blocker's list of waiters and the global blocked_hash.
1156 * locks list must be done while holding the same lock!
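(The "global blocked_hash" referred to above is a hash of pending POSIX requests keyed by lock owner, consulted for deadlock detection; a sketch, with the hash size, key helper and node member name assumed:)

    #define BLOCKED_HASH_BITS 7
    static DEFINE_HASHTABLE(blocked_hash, BLOCKED_HASH_BITS);

    static unsigned long posix_owner_key(struct file_lock *fl)
    {
        return (unsigned long)fl->fl_owner;
    }

    static void locks_insert_global_blocked(struct file_lock *waiter)
    {
        lockdep_assert_held(&blocked_lock_lock);
        hash_add(blocked_hash, &waiter->fl_link, posix_owner_key(waiter));
    }

    static void locks_delete_global_blocked(struct file_lock *waiter)
    {
        lockdep_assert_held(&blocked_lock_lock);
        hash_del(&waiter->fl_link);
    }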
1199 /* If the next lock in the list has entirely bigger
1236 /* If the next lock in the list has a higher end
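(Those two comments come from the POSIX range-merging scan, where locks on a context list are kept ordered by start offset; the helpers below are a hypothetical restatement of the two checks, not code from the file.)

    /* "Entirely bigger": the next lock starts beyond our end and is not even
     * adjacent, so it can neither overlap with nor be coalesced into the new
     * request.  (The lock-to-EOF case, fl_end == OFFSET_MAX, needs separate
     * handling and is elided here.) */
    static bool lock_entirely_after(struct file_lock *next, struct file_lock *request)
    {
        return next->fl_start > request->fl_end + 1;
    }

    /* "Higher end": the next lock extends past our end, so the region beyond
     * request->fl_end survives as (part of) that existing lock. */
    static bool lock_has_higher_end(struct file_lock *next, struct file_lock *request)
    {
        return next->fl_end > request->fl_end;
    }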
2773 /* The next member in the linked list may be the node itself, i.e. the list is empty */
2795 /* View this cross-linked structure as a binary tree: the first member of fl_blocked_requests
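(Reading the structure that way: each blocked request carries its own fl_blocked_requests list of requests waiting on it in turn, so the whole thing forms a tree that can be walked. The kernel's real wakeup path is iterative and lock-protected; the recursive helper below is only a hypothetical illustration of the shape.)

    /* Count every request transitively blocked behind "blocker".
     * Caller is assumed to hold blocked_lock_lock so the tree is stable. */
    static unsigned int count_blocked_tree(struct file_lock *blocker)
    {
        struct file_lock *fl;
        unsigned int n = 0;

        list_for_each_entry(fl, &blocker->fl_blocked_requests, fl_blocked_member) {
            n++;                            /* this waiter...      */
            n += count_blocked_tree(fl);    /* ...and its subtree. */
        }
        return n;
    }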