Lines Matching defs:task

77 * We have two separate sets of flags: task->state
78 * is about runnability, while task->exit_state is
79 * about the task exiting. Confusing, but this way
118 #define task_is_traced(task) (((task)->state & __TASK_TRACED) != 0)
120 #define task_is_stopped(task) (((task)->state & __TASK_STOPPED) != 0)
122 #define task_is_stopped_or_traced(task) (((task)->state & (__TASK_STOPPED | __TASK_TRACED)) != 0)
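These three predicates are plain bit tests on task->state. A minimal sketch of a caller, with a hypothetical function name and error handling (not kernel code), assuming the usual <linux/sched.h> and <linux/errno.h> context:

    /* Hypothetical: refuse to touch a tracee unless it is really
     * stopped in __TASK_TRACED. */
    static int example_check_tracee(struct task_struct *child)
    {
            if (!task_is_traced(child))
                    return -ESRCH;  /* not traced-stopped for us */
            /* safe to inspect the child's stopped state here */
            return 0;
    }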
198 * set_special_state() should be used for those states when the blocking task
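For example, the signal-delivery path enters TASK_STOPPED through it, so the state change cannot be raced by a concurrent wakeup (a single call shown out of its surrounding context):

    set_special_state(TASK_STOPPED);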
379 * @enqueued: instantaneous estimated utilization of a task/cpu
381 * utilization of a task
384 * (EWMA) of a FAIR task's utilization. New samples are added to the moving
385 * average each time a task completes an activation. The sample's weight is chosen
387 * task's workload.
390 * - task: the task's util_avg at last task dequeue time
391 * - cfs_rq: the sum of util_est.enqueued for each RUNNABLE task on that CPU
392 * Thus, the util_est.enqueued of a task represents the contribution to the
393 * estimated utilization of the CPU where that task is currently enqueued.
397 * of an otherwise almost periodic task.
400 * updates. When a task is dequeued, its util_est should not be updated if its
403 * time. Since the max value of util_est.enqueued for a task is 1024 (the PELT util_avg
404 * for a task), it is safe to use the MSB.
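A standalone sketch of the EWMA step described above; the 1/4 weight (a shift of 2) matches UTIL_EST_WEIGHT_SHIFT in kernel/sched/fair.c, but the helper itself is illustrative, not the kernel's code:

    #define UTIL_EST_WEIGHT_SHIFT 2         /* new sample weight: 1/4 */

    /* new_ewma = old_ewma + (sample - old_ewma) / 4 */
    static unsigned int util_est_ewma_sketch(unsigned int ewma,
                                             unsigned int sample)
    {
            int diff = (int)sample - (int)ewma;

            return (unsigned int)((int)ewma + (diff >> UTIL_EST_WEIGHT_SHIFT));
    }

With a 1/4 weight the average moves a quarter of the way toward each new dequeue-time sample, which is what keeps it relatively insensitive to transient spikes in an otherwise periodic task.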
555 * 'mark_start' marks the beginning of an event (task waking up, task
556 * starting to execute, task being preempted) within a window
558 * 'sum' represents how runnable a task has been within the current
563 * RAVG_HIST_SIZE windows. Windows where the task was entirely sleeping are
570 * 'curr_window_cpu' represents the task's contribution to cpu busy time on
573 * 'prev_window_cpu' represents the task's contribution to cpu busy time on
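The two window counters roll over at every window boundary; a standalone sketch of that bookkeeping (the struct and helper here are illustrative; the real accounting lives in the WALT scheduler code on kernels that carry it):

    struct ravg_sketch {
            unsigned int sum;               /* runnable time, current window */
            unsigned int curr_window;       /* busy time, current window     */
            unsigned int prev_window;       /* busy time, previous window    */
    };

    static void rollover_window_sketch(struct ravg_sketch *ravg)
    {
            ravg->prev_window = ravg->curr_window;
            ravg->curr_window = 0;
    }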
632 * they are continuously updated during task execution. Note that
643 * task has to wait for a replenishment to be performed at the
650 * @dl_yielded tells if task gave up the CPU before consuming
653 * @dl_non_contending tells if the task is inactive while still
660 * @dl_overrun tells if the task asked to be informed about runtime
669 * Bandwidth enforcement timer. Each -deadline task has its
670 * own bandwidth to be enforced, thus we need one timer per task.
676 * at the "0-lag time". When a -deadline task blocks, it contributes
708 * The active bit is set whenever a task has got an "effective" value assigned,
710 * This makes it possible to know that a task is refcounted in the rq's bucket corresponding
713 * The user_defined bit is set whenever a task has got a task-specific clamp
714 * value requested from userspace, i.e. the system defaults apply to this task
716 * restrictive task-specific value has been requested, thus making it possible to
717 * implement a "nice" semantic. For example, a task running with a 20%
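The effective value is ultimately applied as an ordinary clamp on the task's utilization; a standalone sketch of that final step (the in-kernel uclamp helpers differ in detail):

    /* With uclamp_max set to 20% of the 1024 scale, a task that is
     * actually busier is still accounted as a 20% task -- the "nice"
     * semantic described above. */
    static unsigned long uclamp_apply_sketch(unsigned long util,
                                             unsigned long uclamp_min,
                                             unsigned long uclamp_max)
    {
            if (util < uclamp_min)
                    return uclamp_min;
            if (util > uclamp_max)
                    return uclamp_max;
            return util;
    }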
768 /* Per task flags (PF_*), defined further below: */
784 * recent_used_cpu is initially set as the last CPU used by a task
785 * that wakes affine another task. Waker/wakee relationships can
809 * 'init_load_pct' represents the initial task load assigned to children
810 * of this task
948 /* task is frozen/stopped (used by the cgroup freezer) */
990 * 'ptraced' is the list of tasks this task is using ptrace() on.
993 * 'ptrace_entry' is this task's link on the p->parent->ptraced list.
1056 /* Objective and real subjective task credentials (COW): */
1059 /* Effective (overridable) subjective task credentials (COW): */
1135 /* PI waiters blocked on a rt_mutex held by this task: */
1271 * - task's runqueue locked, task not running
1294 * scan window were remote/local or failed to migrate. The task scan
1404 /* Coverage collection mode enabled for this task (0 if disabled): */
1413 /* KCOV descriptor wired with this task or NULL: */
1461 /* A live task holds one reference: */
1500 /* CPU-specific state of this task: */
1511 static inline struct pid *task_pid(struct task_struct *task)
1513 return task->thread_pid;
1517 * the helpers to get the task's different pids as they are seen
1527 pid_t __task_pid_nr_ns(struct task_struct *task, enum pid_type type, struct pid_namespace *ns);
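A one-line usage sketch, given a task and a namespace ns in scope: resolving the task's pid number as it appears inside that namespace (the task_pid_nr_ns()-style wrappers in this header are thin layers over this function):

    pid_t nr = __task_pid_nr_ns(task, PIDTYPE_PID, ns);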
1550 * pid_alive - check that a task structure is not stale
1554 * If pid_alive fails, then pointers within the task structure
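The canonical guard pattern, modeled on the parent-pid helpers in this header: only dereference pointers inside the task structure while pid_alive() still holds, under rcu_read_lock():

    pid_t ppid = 0;

    rcu_read_lock();
    if (pid_alive(p))
            ppid = task_pid_nr(rcu_dereference(p->real_parent));
    rcu_read_unlock();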
1650 * is_global_init - check if a task structure is init. Since init
1654 * Check if a task structure is the first user space task the kernel created.
1656 * Return: 1 if the task structure is init. 0 otherwise.
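Callers typically use it to exempt init from fatal actions; for example, the OOM paths refuse to pick init as a victim (the surrounding code here is hypothetical):

    if (is_global_init(p))
            return;         /* never select init as a victim */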
1699 * Only the _current_ task can read/write to tsk->flags, but other
1703 * or during fork: the ptracer task is allowed to write to the
1831 * task_nice - return the nice value of a given task.
1832 * @p: the task in question.
1855 * is_idle_task - is the specified task an idle task?
1856 * @p: the task in question.
1858 * Return: 1 if @p is an idle task. 0 otherwise.
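Both are cheap accessors on *p; a hypothetical filter combining them (consider() is made up for the example; nice values range from -20 to 19):

    if (!is_idle_task(p) && task_nice(p) < 10)
            consider(p);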
1872 struct task_struct task;
1887 static inline struct thread_info *task_thread_info(struct task_struct *task)
1889 return &task->thread_info;
1892 #define task_thread_info(task) ((struct thread_info *)(task)->stack)
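The two definitions above cover both configurations: with CONFIG_THREAD_INFO_IN_TASK the thread_info is embedded in the task_struct, otherwise it sits at the base of the task's kernel stack. Generic code reaches it identically either way:

    struct thread_info *ti = task_thread_info(task);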
1896 * find a task by one of its numerical ids
1899 * finds a task by its pid in the specified namespace
1901 * finds a task by its virtual pid
1910 * find a task by its virtual pid and get the task struct
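A sketch of the two lookup flavours: find_task_by_vpid() is only valid under rcu_read_lock(), so a caller that needs the task afterwards pins it with get_task_struct() (which is what the _get_ variant bundles up):

    struct task_struct *p;

    rcu_read_lock();
    p = find_task_by_vpid(nr);
    if (p)
            get_task_struct(p);     /* keep p valid past the RCU section */
    rcu_read_unlock();

    if (p) {
            /* ... use p ... */
            put_task_struct(p);
    }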
1962 * Set thread flags in other task's structures.
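For example, signal delivery marks the target task through one of these wrappers (a real kernel pattern, shown out of context):

    set_tsk_thread_flag(t, TIF_SIGPENDING);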
2050 * task waiting?: (technically does not depend on CONFIG_PREEMPTION,
2101 * the native optimistic spin heuristic of testing if the lock owner task is
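A standalone sketch of that heuristic: keep spinning only while the owner is actively running on some CPU (->on_cpu is nonzero then); the helper name is illustrative, and the mutex/rwsem code has its own refinements, e.g. for preempted vCPUs:

    static bool owner_running_sketch(struct task_struct *owner)
    {
            return owner && owner->on_cpu;
    }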