Lines Matching refs:workqueues
22 * pools for workqueues which are not bound to any specific CPU - the
241 struct list_head list; /* PR: list of all workqueues */
272 * the workqueues list without grabbing wq_pool_mutex.
273 * This is used to dump all workqueues from sysrq.
302 static DEFINE_MUTEX(wq_pool_mutex); /* protects pools and workqueues list */
308 static LIST_HEAD(workqueues); /* PR: list of all workqueues */
1124 * pool->lock as this path is taken only for unbound workqueues and
1556 /* No point in doing this if NUMA isn't enabled for workqueues */
1603 * This current implementation is specific to unbound workqueues.
2278 * workqueues), so hiding them isn't a problem.
2365 * exception is work items which belong to workqueues with a rescuer which
2467 * workqueues which have works queued on the pool and let them process
3025 * For single threaded workqueues the deadlock happens when the work
3027 * workqueues the deadlock happens when the rescuer stalls, blocking
4047 /* only unbound workqueues can change attributes */
4120 * workqueues behave on CPU_DOWN. If a workqueue user wants strict
4338 * wq_pool_mutex protects global freeze state and workqueues list.
4339 * Grab it, adjust max_active and add the new @wq to workqueues
4349 list_add_tail_rcu(&wq->list, &workqueues);
4491 /* disallow meddling with max_active for ordered workqueues */
4550 * Note that both per-cpu and unbound workqueues may be associated with
4786 * all busy workqueues and pools.
4797 pr_info("Showing busy workqueues and worker pools:\n");
4799 list_for_each_entry_rcu(wq, &workqueues, list) {
5100 /* update NUMA affinity of unbound workqueues */
5101 list_for_each_entry(wq, &workqueues, list)
5118 /* update NUMA affinity of unbound workqueues */
5120 list_for_each_entry(wq, &workqueues, list)
5191 * freeze_workqueues_begin - begin freezing workqueues
5193 * Start freezing workqueues. After this function returns, all freezable
5194 * workqueues will queue new works to their inactive_works list instead of
5210 list_for_each_entry(wq, &workqueues, list) {
5221 * freeze_workqueues_busy - are freezable workqueues still busy?
5230 * %true if some freezable workqueues are still busy. %false if freezing
5243 list_for_each_entry(wq, &workqueues, list) {
5267 * thaw_workqueues - thaw workqueues
5269 * Thaw workqueues. Normal queueing is restored and all collected
5288 list_for_each_entry(wq, &workqueues, list) {
5309 list_for_each_entry(wq, &workqueues, list) {
5342 * The low-level workqueues cpumask is a global cpumask that limits
5343 * the affinity of all unbound workqueues. This function checks @cpumask,
5344 * applies it to all unbound workqueues and updates all their pwqs.
5394 * /sys/bus/workqueue/devices/WQ_NAME. All visible workqueues have the
5400 * Unbound workqueues have the following extra attributes.
5698 * workqueues.
5965 * idr are up. It sets up all the data structures and system workqueues
5966 * and allows early boot code to create workqueues and queue/cancel work
6061 * Also, while iterating workqueues, create rescuers if requested.
6073 list_for_each_entry(wq, &workqueues, list) {