Lines Matching refs:workers
62 * While associated (!DISASSOCIATED), all workers are bound to the
66 * While DISASSOCIATED, the cpu may be offline and all workers have
75 POOL_DISASSOCIATED = 1 << 2, /* cpu can't serve workers */
103 * Rescue workers are used only in emergencies and shared by
159 int nr_workers; /* L: total number of workers */
160 int nr_idle; /* L: currently idle workers */
162 struct list_head idle_list; /* X: list of idle workers */
164 struct timer_list mayday_timer; /* L: SOS timer for workers */
166 /* a worker is either on busy_hash or idle_list, or the manager */
168 /* L: hash of busy workers */
171 struct list_head workers; /* A: attached workers */
172 struct completion *detach_completion; /* all workers detached */
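A quick aid for reading the scattered struct hits above (kernel lines 159-172): reassembled, and heavily trimmed, the matched part of struct worker_pool looks roughly like the sketch below. The stand-in type declarations exist only so it compiles outside the kernel tree, and the busy_hash size is illustrative (the real field comes from DECLARE_HASHTABLE). The L:/X:/A: prefixes are workqueue.c's locking annotations: L: means pool->lock, A: means wq_pool_attach_mutex, and X: means pool->lock plus extra rules spelled out at the top of the file.

	/* Trimmed reconstruction -- not the real definition. */
	struct list_head { struct list_head *next, *prev; };
	struct hlist_head { void *first; };
	struct timer_list { void *opaque; };
	struct completion;

	struct worker_pool {
		/* ... */
		int			nr_workers;	/* L: total number of workers */
		int			nr_idle;	/* L: currently idle workers */

		struct list_head	idle_list;	/* X: list of idle workers */
		struct timer_list	mayday_timer;	/* L: SOS timer for workers */

		/* a worker is either on busy_hash or idle_list, or the manager */
		struct hlist_head	busy_hash[64];	/* L: hash of busy workers */

		struct list_head	workers;	/* A: attached workers */
		struct completion	*detach_completion; /* all workers detached */
		/* ... */
	};
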
399 * for_each_pool_worker - iterate through all workers of a worker_pool
401 * @pool: worker_pool to iterate workers of
409 list_for_each_entry((worker), &(pool)->workers, node) \
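The macro at line 409 is the kernel's stock intrusive-list walk over pool->workers. Below is a minimal, runnable userspace sketch of the same pattern; the list helpers are hand-rolled stand-ins for <linux/list.h>, and the two structs are stripped to the fields the walk needs (the real macro additionally expects wq_pool_attach_mutex to be held).

	#include <stdio.h>
	#include <stddef.h>

	/* Stand-ins for <linux/list.h>: a circular doubly linked list
	 * whose nodes are embedded in the containing object. */
	struct list_head { struct list_head *next, *prev; };

	#define container_of(ptr, type, member) \
		((type *)((char *)(ptr) - offsetof(type, member)))

	static void INIT_LIST_HEAD(struct list_head *head)
	{
		head->next = head->prev = head;
	}

	static void list_add_tail(struct list_head *new, struct list_head *head)
	{
		new->prev = head->prev;
		new->next = head;
		head->prev->next = new;
		head->prev = new;
	}

	struct worker { int id; struct list_head node; };
	struct worker_pool { struct list_head workers; };

	/* Analogue of for_each_pool_worker (line 409): visit every
	 * worker attached to @pool through its embedded ->node link. */
	#define for_each_pool_worker(worker, pool)			\
		for ((worker) = container_of((pool)->workers.next,	\
					     struct worker, node);	\
		     &(worker)->node != &(pool)->workers;		\
		     (worker) = container_of((worker)->node.next,	\
					     struct worker, node))

	int main(void)
	{
		struct worker w[3] = { { .id = 0 }, { .id = 1 }, { .id = 2 } };
		struct worker_pool pool;
		struct worker *worker;
		int i;

		INIT_LIST_HEAD(&pool.workers);
		for (i = 0; i < 3; i++)		/* cf. attach, line 1872 */
			list_add_tail(&w[i].node, &pool.workers);

		for_each_pool_worker(worker, &pool)
			printf("worker %d\n", worker->id);
		return 0;
	}
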
774 * running workers.
776 * Note that, because unbound workers never contribute to nr_running, this
785 /* Can I start working? Called from busy but !running workers. */
791 /* Do I need to keep working? Called from currently running workers. */
804 /* Do we have too many workers and should some go away? */
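The three predicates above (lines 785-804) drive the pool's worker-count hysteresis. The "too many" side keeps a small idle reserve: a pool is considered overstaffed once more than two workers sit idle and the surplus could cover the busy count at a fixed ratio. A hedged, self-contained sketch of that heuristic, with the manager-accounting detail of the real function omitted and the constant taken from workqueue.c:

	#include <stdbool.h>
	#include <stdio.h>

	#define MAX_IDLE_WORKERS_RATIO	4	/* 1/4 of busy can be idle */

	struct worker_pool { int nr_workers; int nr_idle; };

	/* Sketch of too_many_workers() (line 804): tolerate 2 idle
	 * workers as a reserve, then report overstaffing once the
	 * remaining idle workers could cover the busy ones at 1:4. */
	static bool too_many_workers(const struct worker_pool *pool)
	{
		int nr_idle = pool->nr_idle;
		int nr_busy = pool->nr_workers - nr_idle;

		return nr_idle > 2 &&
		       (nr_idle - 2) * MAX_IDLE_WORKERS_RATIO >= nr_busy;
	}

	int main(void)
	{
		struct worker_pool pool = { .nr_workers = 20, .nr_idle = 6 };

		/* 6 idle, 14 busy: (6 - 2) * 4 = 16 >= 14, so trim. */
		printf("%s\n", too_many_workers(&pool) ? "too many" : "ok");
		return 0;
	}
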
885 * workers, also reach here, let's not access anything before
931 * to sleep. It's used by psi to identify aggregation workers during
1350 * list_add_tail() or we see zero nr_running to avoid workers lying
1872 list_add_tail(&worker->node, &pool->workers);
1896 if (list_empty(&pool->workers))
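Lines 1872 and 1896 are the two ends of the attach/detach protocol that pool teardown (lines 3571-3587) relies on: the last worker off pool->workers completes pool->detach_completion, which put_unbound_pool() is blocked on. Here is a runnable pthread sketch of just that handshake, under loudly labeled assumptions: a plain counter stands in for the ->workers list, the completion type is a condvar stand-in for <linux/completion.h>, and the teardown side arms the completion unconditionally rather than only when workers remain.

	/* Build with: cc -pthread detach.c */
	#include <pthread.h>
	#include <stdbool.h>
	#include <stdio.h>

	/* Condvar-based stand-in for struct completion. */
	struct completion {
		pthread_mutex_t	lock;
		pthread_cond_t	cond;
		bool		done;
	};

	static void complete(struct completion *c)
	{
		pthread_mutex_lock(&c->lock);
		c->done = true;
		pthread_cond_signal(&c->cond);
		pthread_mutex_unlock(&c->lock);
	}

	static void wait_for_completion(struct completion *c)
	{
		pthread_mutex_lock(&c->lock);
		while (!c->done)
			pthread_cond_wait(&c->cond, &c->lock);
		pthread_mutex_unlock(&c->lock);
	}

	struct worker_pool {
		int			nr_attached;	/* stands in for ->workers */
		struct completion	*detach_completion;
		pthread_mutex_t		attach_mutex;	/* wq_pool_attach_mutex */
	};

	/* Detach side, cf. line 1896: the last worker out signals
	 * whoever is tearing the pool down. */
	static void *worker_fn(void *arg)
	{
		struct worker_pool *pool = arg;
		struct completion *detach_completion = NULL;

		pthread_mutex_lock(&pool->attach_mutex);
		if (--pool->nr_attached == 0)
			detach_completion = pool->detach_completion;
		pthread_mutex_unlock(&pool->attach_mutex);

		if (detach_completion)
			complete(detach_completion);
		return NULL;
	}

	int main(void)
	{
		struct completion done = {
			PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, false
		};
		struct worker_pool pool = {
			.nr_attached = 2,
			.detach_completion = &done,
			.attach_mutex = PTHREAD_MUTEX_INITIALIZER,
		};
		pthread_t t[2];
		int i;

		for (i = 0; i < 2; i++)
			pthread_create(&t[i], NULL, worker_fn, &pool);

		wait_for_completion(&done);	/* cf. line 3587 */
		for (i = 0; i < 2; i++)
			pthread_join(t[i], NULL);
		printf("all workers detached\n");
		return 0;
	}
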
2168 * interaction with other workers on the same cpu, queueing and
2202 * multiple workers on a single cpu. Check whether anyone is
2239 * false for normal per-cpu workers since nr_running would always
2241 * pending work items for WORKER_NOT_RUNNING workers such as the
2362 * The worker thread function. All workers belong to a worker_pool -
2363 * either a per-cpu one or dynamic unbound one. These workers process all
2494 * pwq(s) queued. This can happen by non-rescuer workers consuming
3452 INIT_LIST_HEAD(&pool->workers);
3571 * Become the manager and destroy all workers. This prevents
3572 * @pool's workers from blocking on attach_mutex. We're the last
3587 if (!list_empty(&pool->workers))
4117 * workqueue with a cpumask spanning multiple nodes, the workers which were
4843 pr_cont(" hung=%lus workers=%d", hung, pool->nr_workers);
4932 * We've blocked all attach/detach operations. Make all workers
4933 * unbound and set DISASSOCIATED. Before this, all workers
4960 * are served by workers tied to the pool.
4976 * rebind_workers - rebind all workers of a pool to the associated CPU
4979 * @pool->cpu is coming online. Rebind all workers to the CPU.
4988 * Restore CPU affinity of all workers. As all idle workers should
4991 * of all workers first and then clear UNBOUND. As we're called
5008 * work. Kick all idle workers so that they migrate to the
5041 * restore_unbound_workers_cpumask - restore cpumask of unbound workers
5048 * online CPU before, cpus_allowed of all its workers should be restored.
5112 /* unbinding per-cpu workers should happen on the local CPU */
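The hits at lines 4976-5112 all revolve around one operation: walking a pool's workers and resetting each task's CPU affinity (affinity is restored first, and only then is the UNBOUND flag cleared). The kernel does the walk with set_cpus_allowed_ptr() inside for_each_pool_worker(); the sketch below is a Linux userspace analogue using pthread affinity, purely illustrative and not the workqueue code.

	/* Build with: cc -pthread rebind.c */
	#define _GNU_SOURCE
	#include <pthread.h>
	#include <sched.h>
	#include <stdio.h>
	#include <unistd.h>

	#define NR_WORKERS 2

	static void *worker_fn(void *arg)
	{
		(void)arg;
		sleep(1);	/* pretend to process work */
		printf("worker now on cpu %d\n", sched_getcpu());
		return NULL;
	}

	int main(void)
	{
		pthread_t workers[NR_WORKERS];
		cpu_set_t mask;
		int i;

		for (i = 0; i < NR_WORKERS; i++)
			pthread_create(&workers[i], NULL, worker_fn, NULL);

		/* "Rebind": pin every worker back to CPU 0, the analogue
		 * of the for_each_pool_worker() loop in rebind_workers(). */
		CPU_ZERO(&mask);
		CPU_SET(0, &mask);
		for (i = 0; i < NR_WORKERS; i++)
			pthread_setaffinity_np(workers[i], sizeof(mask), &mask);

		for (i = 0; i < NR_WORKERS; i++)
			pthread_join(workers[i], NULL);
		return 0;
	}
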
5403 * nice RW int : nice value of the workers
5404 * cpumask RW mask : bitmask of allowed CPUs for the workers
6046 * with the initial workers and enable future kworker creations.
6082 /* create the initial workers */