Lines in kernel/workqueue.c matching refs: scheduled
314 /* CPU where unbound work was last round robin scheduled from this CPU */
1054 * @work: start of series of works to be scheduled
1059 * be scheduled starts at @work and includes any consecutive work with
1063 * the last scheduled work. This allows move_linked_works() to be
1086 * multiple works to the scheduled queue, the next position
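The fragments at lines 1054-1086 come from move_linked_works(), the helper that splices a work item and any successors marked WORK_STRUCT_LINKED onto another list in one pass. The sketch below paraphrases that helper (it is not a verbatim excerpt) to show why the @nextp out-parameter exists: a caller iterating with list_for_each_entry_safe() needs its cursor advanced past everything that was spliced away.

    static void move_linked_works(struct work_struct *work,
                                  struct list_head *head,
                                  struct work_struct **nextp)
    {
        struct work_struct *n;

        /*
         * A linked series always ends before the end of the worklist,
         * so the iteration can use NULL as the list head.
         */
        list_for_each_entry_safe_from(work, n, NULL, entry) {
            list_move_tail(&work->entry, head);
            if (!(*work_data_bits(work) & WORK_STRUCT_LINKED))
                break;
        }

        /*
         * If the caller is itself inside list_for_each_entry_safe()
         * and multiple works were moved, update its cursor (line 1086).
         */
        if (nextp)
            *nextp = n;
    }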
1125 * the release work item is scheduled on a per-cpu workqueue. To
1676 * @delay is zero and @dwork is idle, it will be scheduled for immediate
1708 * zero, @work is guaranteed to be scheduled immediately regardless of its
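Lines 1676 and 1708 quote the kerneldoc of queue_delayed_work_on() and mod_delayed_work_on(): with a zero @delay the work bypasses the timer and is queued for immediate execution, and mod_delayed_work() additionally re-arms a work that is already pending. A minimal caller sketch using the system workqueue; the poll_work/poll_fn names are made up for illustration.

    #include <linux/module.h>
    #include <linux/workqueue.h>

    static void poll_fn(struct work_struct *work)
    {
        pr_info("poll tick\n");
    }
    static DECLARE_DELAYED_WORK(poll_work, poll_fn);

    static int __init example_init(void)
    {
        /* idle dwork, nonzero delay: timer armed for ~1s from now */
        queue_delayed_work(system_wq, &poll_work, HZ);

        /*
         * Something urgent came up.  mod_delayed_work() re-arms the
         * timer even though the work is already pending; with
         * @delay == 0 it is guaranteed to be scheduled immediately
         * regardless of its current state (line 1708).
         */
        mod_delayed_work(system_wq, &poll_work, 0);
        return 0;
    }

    static void __exit example_exit(void)
    {
        cancel_delayed_work_sync(&poll_work);
    }

    module_init(example_init);
    module_exit(example_exit);
    MODULE_LICENSE("GPL");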
1839 INIT_LIST_HEAD(&worker->scheduled);
1987 WARN_ON(!list_empty(&worker->scheduled)) ||
2079 * sent to all rescuers with works scheduled on @pool to resolve
2208 move_linked_works(work, &collision->scheduled, NULL);
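Line 2208 is the non-reentrancy check in process_one_work(): if another worker on the same pool is already executing this work item, the new instance is not run concurrently but is deferred onto that worker's ->scheduled list instead. A paraphrased sketch of that path (not a verbatim excerpt):

    struct worker *collision;

    /*
     * A single work item must not run concurrently on one pool.  If
     * someone is already executing it, hand the new instance to that
     * worker; it will pick it up via its ->scheduled list.
     */
    collision = find_worker_executing_work(pool, work);
    if (unlikely(collision)) {
        move_linked_works(work, &collision->scheduled, NULL);
        return;
    }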
2328 * process_scheduled_works - process scheduled works
2331 * Process all scheduled works. Please note that the scheduled list
2341 while (!list_empty(&worker->scheduled)) {
2342 struct work_struct *work = list_first_entry(&worker->scheduled,
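Lines 2328-2342 are the head of process_scheduled_works(). The loop re-fetches the first entry on every iteration because executing a work item can append more entries to ->scheduled, for example flush barriers linked to the work that just ran. A paraphrased sketch of the whole, very short, function:

    static void process_scheduled_works(struct worker *worker)
    {
        while (!list_empty(&worker->scheduled)) {
            struct work_struct *work =
                list_first_entry(&worker->scheduled,
                                 struct work_struct, entry);
            /* may add new entries to ->scheduled, hence the re-fetch */
            process_one_work(worker, work);
        }
    }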
2404 * ->scheduled list can only be filled while a worker is
2408 WARN_ON_ONCE(!list_empty(&worker->scheduled));
2429 if (unlikely(!list_empty(&worker->scheduled)))
2432 move_linked_works(work, &worker->scheduled, NULL);
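Lines 2404-2432 come from the dispatch loop in worker_thread(). A plain work item is executed directly and ->scheduled is drained only if barriers got linked to it, while a work that heads a WORK_STRUCT_LINKED chain is first moved onto ->scheduled so the whole chain is processed as one batch. A paraphrased sketch of that loop under those assumptions:

    do {
        struct work_struct *work =
            list_first_entry(&pool->worklist,
                             struct work_struct, entry);

        if (likely(!(*work_data_bits(work) & WORK_STRUCT_LINKED))) {
            /* common case: a single, unlinked work item */
            process_one_work(worker, work);
            if (unlikely(!list_empty(&worker->scheduled)))
                process_scheduled_works(worker);
        } else {
            /* linked chain: collect it on ->scheduled, then drain */
            move_linked_works(work, &worker->scheduled, NULL);
            process_scheduled_works(worker);
        }
    } while (keep_working(pool));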
2478 struct list_head *scheduled = &rescuer->scheduled;
2524 WARN_ON_ONCE(!list_empty(scheduled));
2529 move_linked_works(work, scheduled, &n);
2534 if (!list_empty(scheduled)) {
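Lines 2478-2534 are from rescuer_thread(). Under memory pressure the rescuer walks the pool's worklist, pulls every work item that was issued through the mayday'd pool_workqueue onto its private rescuer->scheduled batch, and then runs that batch. A paraphrased sketch stitching those fragments together; pwq and pool are the usual locals of that function:

    struct work_struct *work, *n;
    struct list_head *scheduled = &rescuer->scheduled;

    /* the rescuer always starts a mayday with an empty batch */
    WARN_ON_ONCE(!list_empty(scheduled));

    /* slurp in all works issued via this pool_workqueue */
    list_for_each_entry_safe(work, n, &pool->worklist, entry)
        if (get_work_pwq(work) == pwq)
            move_linked_works(work, scheduled, &n);

    if (!list_empty(scheduled))
        process_scheduled_works(rescuer);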
2688 head = worker->scheduled.next;
2775 * flush_workqueue - ensure that any scheduled work has run to completion.
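Line 2775 is the kerneldoc summary of flush_workqueue(). A minimal usage sketch of the common pattern of flushing a private workqueue before tearing it down; the ex_wq/ex_work names are illustrative only:

    #include <linux/errno.h>
    #include <linux/workqueue.h>

    static struct workqueue_struct *ex_wq;

    static void ex_fn(struct work_struct *work) { }
    static DECLARE_WORK(ex_work, ex_fn);

    static int ex_setup(void)
    {
        ex_wq = alloc_workqueue("ex_wq", 0, 0);
        if (!ex_wq)
            return -ENOMEM;
        queue_work(ex_wq, &ex_work);
        return 0;
    }

    static void ex_teardown(void)
    {
        /* wait for every work queued so far to run to completion */
        flush_workqueue(ex_wq);
        /* destroy_workqueue() drains any remaining work as well */
        destroy_workqueue(ex_wq);
    }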
3107 * because we may get scheduled between @work's completion
3336 * 1 - function was scheduled for execution
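The "1 - function was scheduled for execution" return value at line 3336 matches the kerneldoc of execute_in_process_context(), which runs the callback immediately (returning 0) when called from process context and otherwise queues it as a work item (returning 1). A hedged caller sketch, assuming that is indeed the function in question; the callback and storage names are illustrative:

    #include <linux/printk.h>
    #include <linux/workqueue.h>

    static void release_resources(struct work_struct *work)
    {
        /* may sleep: this always runs in process context */
    }

    static struct execute_work release_ew;

    static void request_release(void)
    {
        /*
         * Returns 0 if release_resources() ran synchronously here,
         * 1 if it was scheduled for execution on a workqueue.
         */
        if (execute_in_process_context(release_resources, &release_ew))
            pr_debug("release deferred to a workqueue\n");
    }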
4743 list_for_each_entry(work, &worker->scheduled, entry)
4912 * pool which make migrating pending and scheduled works very
5046 * CPUs. When a worker of such pool get scheduled, the scheduler resets
5968 * created and scheduled right before early initcalls.
6043 * and invoked as soon as kthreads can be created and scheduled.