Lines Matching refs:list
34 * mark->connector->lock protects the list of marks anchored inside an
37 * A list of notification marks relating to inode / mnt is contained in
39 * marks in the list and is also protected by fsnotify_mark_srcu. A mark gets
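
The fragments above describe two layers of protection for the per-object mark list: writers take mark->connector->lock, while readers may walk the list under fsnotify_mark_srcu alone. Below is a minimal read-side sketch of that rule, assuming the struct layouts from <linux/fsnotify_backend.h>; note that fsnotify_mark_srcu is file-local to fs/notify/ in mainline, so code like this only builds inside that directory. The later sketches in this section assume the same headers.

#include <linux/srcu.h>
#include <linux/fsnotify_backend.h>

/* Read-side walk of one connector's mark list (sketch, not the in-tree code). */
static __u32 sketch_collect_masks(struct fsnotify_mark_connector *conn)
{
	struct fsnotify_mark *mark;
	__u32 mask = 0;
	int idx;

	/* Readers only need SRCU; writers take conn->lock as well. */
	idx = srcu_read_lock(&fsnotify_mark_srcu);
	hlist_for_each_entry(mark, &conn->list, obj_list)
		mask |= mark->mask;
	srcu_read_unlock(&fsnotify_mark_srcu, idx);

	return mask;
}
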
166 hlist_for_each_entry(mark, &conn->list, obj_list) {
180 * Calculate mask of events for a list of marks. The caller must make sure
183 * list.
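
The "calculate mask of events for a list of marks" fragment refers to recomputing the union of all mark masks attached to one object. A simplified sketch of that loop follows, taking conn->lock as the writer-side lock and using FSNOTIFY_MARK_FLAG_ATTACHED to skip marks that are already being torn down; the in-tree fsnotify_recalc_mask() additionally stores the result in the owning object's cached mask.

static __u32 sketch_recalc_mask(struct fsnotify_mark_connector *conn)
{
	struct fsnotify_mark *mark;
	__u32 new_mask = 0;

	spin_lock(&conn->lock);
	hlist_for_each_entry(mark, &conn->list, obj_list) {
		/* Detached-but-not-yet-freed marks no longer contribute. */
		if (mark->flags & FSNOTIFY_MARK_FLAG_ATTACHED)
			new_mask |= mark->mask;
	}
	spin_unlock(&conn->lock);

	return new_mask;
}
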
315 if (hlist_empty(&conn->list)) {
350 * list. The mark may already have been removed from the list by now and be on its way to be
409 * lists, we can drop SRCU lock, and safely resume the list iteration
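
The two fragments above are about surviving list changes while sleeping: a walker that must block pins the current mark with a reference, drops the SRCU read lock, and can later re-enter SRCU and continue from that mark because the reference keeps it on the object list. A sketch of that pattern; the in-tree helpers are more careful and refuse to pin a mark whose reference count has already dropped to zero.

static void sketch_blocking_step(struct fsnotify_mark *mark, int *srcu_idx)
{
	fsnotify_get_mark(mark);			/* pin the mark on the list */
	srcu_read_unlock(&fsnotify_mark_srcu, *srcu_idx);

	/* ...sleeping work here, e.g. waiting for a userspace response... */

	*srcu_idx = srcu_read_lock(&fsnotify_mark_srcu);
	fsnotify_put_mark(mark);			/* iteration may now resume */
}
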
433 * Mark mark as detached, remove it from group list. Mark still stays in object
434 * list until its last reference is dropped. Note that we rely on mark being
435 * removed from group list before corresponding reference to it is dropped. In
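
The detach comment describes a two-stage teardown: the mark leaves the group's bookkeeping first, under group->mark_mutex, and only falls off the object list once its last reference is dropped after an SRCU grace period. A condensed sketch of that ordering using the exported fsnotify_detach_mark()/fsnotify_free_mark()/fsnotify_put_mark() helpers; error handling and the caller's own reference management are simplified.

static void sketch_destroy_one_mark(struct fsnotify_mark *mark)
{
	struct fsnotify_group *group = mark->group;

	/* Stage 1: unhook from the group, under its mark_mutex. */
	mutex_lock(&group->mark_mutex);
	fsnotify_detach_mark(mark);
	mutex_unlock(&group->mark_mutex);

	/*
	 * Stage 2: SRCU readers can still reach the mark through conn->list;
	 * it is only freed after the last reference is dropped and a grace
	 * period has elapsed.
	 */
	fsnotify_free_mark(mark);
	fsnotify_put_mark(mark);	/* drop our reference */
}
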
549 INIT_HLIST_HEAD(&conn->list);
568 /* Someone else created list structure for us */
579 * hold a reference to a mark on the list may directly lock connector->lock as
580 * they are sure the list cannot go away under them.
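
"Someone else created list structure for us" is the losing side of a lockless attach: the connector is allocated up front, published with cmpxchg(), and simply thrown away if another task attached one first. A stripped-down sketch of that pattern; the object type, back-pointer initialization and the dedicated kmem_cache used in mainline are omitted, with kzalloc()/kfree() (from <linux/slab.h>) standing in for the cache.

static int sketch_attach_connector(struct fsnotify_mark_connector **connp)
{
	struct fsnotify_mark_connector *conn;

	conn = kzalloc(sizeof(*conn), GFP_KERNEL);	/* mainline uses a kmem_cache */
	if (!conn)
		return -ENOMEM;
	spin_lock_init(&conn->lock);
	INIT_HLIST_HEAD(&conn->list);

	/* Publish atomically; losing the race just means the list already exists. */
	if (cmpxchg(connp, NULL, conn) != NULL)
		kfree(conn);			/* someone else created it for us */
	return 0;
}
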
604 * Add mark into proper place in given list of marks. These marks may be used
660 if (hlist_empty(&conn->list)) {
661 hlist_add_head_rcu(&mark->obj_list, &conn->list);
665 /* should mark be in the middle of the current list? */
666 hlist_for_each_entry(lmark, &conn->list, obj_list) {
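
The add-mark fragments show how obj_list is kept ordered: an empty list gets the new head, otherwise the mark is inserted so that higher-priority groups come first, and a second mark from the same group is normally rejected with -EEXIST. A compressed sketch of that insertion, assuming conn->lock is already held; the in-tree fsnotify_add_mark_list() also breaks priority ties by group pointer, honours an allow-duplicates flag and recalculates the object mask afterwards.

static int sketch_add_to_obj_list(struct fsnotify_mark_connector *conn,
				  struct fsnotify_mark *mark)
{
	struct fsnotify_mark *lmark, *last = NULL;

	if (hlist_empty(&conn->list)) {
		hlist_add_head_rcu(&mark->obj_list, &conn->list);
		return 0;
	}

	/* Should the mark sit in the middle of the current list? */
	hlist_for_each_entry(lmark, &conn->list, obj_list) {
		last = lmark;
		if (lmark->group == mark->group)
			return -EEXIST;	/* one mark per group per object */
		if (mark->group->priority > lmark->group->priority) {
			hlist_add_before_rcu(&mark->obj_list, &lmark->obj_list);
			return 0;
		}
	}

	/* Lowest priority: append after the last mark we saw. */
	hlist_add_behind_rcu(&mark->obj_list, &last->obj_list);
	return 0;
}

Keeping obj_list sorted this way lets event delivery visit higher-priority groups first without re-sorting on every event.
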
759 * Given a list of marks, find the mark associated with a given group. If found
772 hlist_for_each_entry(mark, &conn->list, obj_list) {
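
The lookup fragments correspond to finding, on one object, the mark owned by a particular group. A minimal sketch, assuming the caller holds conn->lock (the in-tree fsnotify_find_mark() takes it itself) and that the found mark is pinned with fsnotify_get_mark() before being returned.

static struct fsnotify_mark *
sketch_find_mark(struct fsnotify_mark_connector *conn,
		 struct fsnotify_group *group)
{
	struct fsnotify_mark *mark;

	hlist_for_each_entry(mark, &conn->list, obj_list) {
		if (mark->group == group &&
		    (mark->flags & FSNOTIFY_MARK_FLAG_ATTACHED)) {
			fsnotify_get_mark(mark);	/* caller must put it */
			return mark;
		}
	}
	return NULL;
}
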
801 * to_free list so we have to use mark_mutex even when accessing that
802 * list. And freeing a mark requires us to drop mark_mutex. So we can
803 * reliably free only the first mark in the list. That's why we first
804 * move the marks to be freed onto the to_free list in one go and then free marks in
805 * to_free list one by one.
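
The to_free comments describe why clearing a group's marks is split in two: every mark is first moved onto a private to_free list in one pass under group->mark_mutex, then marks are peeled off that list one at a time, re-taking the mutex for each, because freeing a mark requires dropping the mutex. A sketch of that shape; the real fsnotify_clear_marks_by_group() also filters by object type when moving marks.

static void sketch_clear_marks_by_group(struct fsnotify_group *group)
{
	struct fsnotify_mark *mark;
	LIST_HEAD(to_free);

	/* Phase 1: move everything onto a private list in one go. */
	mutex_lock(&group->mark_mutex);
	list_splice_init(&group->marks_list, &to_free);
	mutex_unlock(&group->mark_mutex);

	/* Phase 2: peel marks off to_free one by one. */
	for (;;) {
		mutex_lock(&group->mark_mutex);
		if (list_empty(&to_free)) {
			mutex_unlock(&group->mark_mutex);
			break;
		}
		mark = list_first_entry(&to_free, struct fsnotify_mark, g_list);
		fsnotify_get_mark(mark);
		fsnotify_detach_mark(mark);
		mutex_unlock(&group->mark_mutex);
		fsnotify_free_mark(mark);
		fsnotify_put_mark(mark);
	}
}

Because another path can free marks from to_free whenever the mutex is dropped, each iteration re-checks the list head under the mutex instead of trusting a saved cursor.
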
844 * list can get modified. However we are holding a mark reference and
848 hlist_for_each_entry(mark, &conn->list, obj_list) {
858 * Detach list from object now so that we don't pin inode until all
894 /* exchange the list head */
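
The last two fragments belong to the deferred-destruction side: the object's whole mark list is cut loose in one step (so the inode is not pinned while references drain), and a worker later exchanges the shared destroy list for a private one, waits for a single SRCU grace period, and frees the whole batch. A sketch of the worker half; destroy_list/destroy_lock mirror the file-local names in mainline, and the final per-mark free is left as a comment because mainline uses a file-local helper there.

static LIST_HEAD(destroy_list);		/* marks whose last reference was dropped */
static DEFINE_SPINLOCK(destroy_lock);

static void sketch_reap_destroy_list(void)
{
	struct fsnotify_mark *mark, *next;
	struct list_head private_destroy_list;

	spin_lock(&destroy_lock);
	/* Exchange the list head: new arrivals go onto a fresh, empty list. */
	list_replace_init(&destroy_list, &private_destroy_list);
	spin_unlock(&destroy_lock);

	/* One grace period covers every mark batched above. */
	synchronize_srcu(&fsnotify_mark_srcu);

	list_for_each_entry_safe(mark, next, &private_destroy_list, g_list) {
		list_del_init(&mark->g_list);
		/* Final free of the mark happens here (file-local helper in mainline). */
	}
}

Batching behind a single synchronize_srcu() call is what keeps the grace-period wait out of the individual fsnotify_put_mark() calls.
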