
Searched refs:tasks (Results 1 - 5 of 5) sorted by relevance

/device/soc/rockchip/common/sdk_linux/include/linux/sched/
signal.h
602 #define tasklist_empty() list_empty(&init_task.tasks)
604 #define next_task(p) list_entry_rcu((p)->tasks.next, struct task_struct, tasks)
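These two macros walk the global task list threaded through task_struct::tasks. As a rough userspace sketch of the underlying intrusive-list mechanism (the my_task type and the plain non-RCU list_entry() below are illustrative assumptions, not the kernel's definitions, which use list_entry_rcu()):

    #include <stddef.h>
    #include <stdio.h>

    /* Minimal intrusive doubly linked list, mirroring the kernel's list_head. */
    struct list_head { struct list_head *next, *prev; };

    /* Recover the containing struct from an embedded list_head, as the kernel's
     * list_entry() does (minus the RCU dereference in list_entry_rcu()). */
    #define list_entry(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

    /* Stand-in for task_struct: the 'tasks' node links every task together. */
    struct my_task {
        int pid;
        struct list_head tasks;
    };

    #define next_task(p) list_entry((p)->tasks.next, struct my_task, tasks)

    int main(void)
    {
        struct my_task init = { .pid = 0 }, a = { .pid = 1 }, b = { .pid = 2 };

        /* Hand-build the circular list: init -> a -> b -> init. */
        init.tasks.next = &a.tasks;    a.tasks.prev = &init.tasks;
        a.tasks.next = &b.tasks;       b.tasks.prev = &a.tasks;
        b.tasks.next = &init.tasks;    init.tasks.prev = &b.tasks;

        /* Walk the ring the way next_task() does, starting after "init_task". */
        for (struct my_task *p = next_task(&init); p != &init; p = next_task(p))
            printf("pid %d\n", p->pid);
        return 0;
    }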
/device/soc/rockchip/common/sdk_linux/kernel/sched/
core.c
75 * Number of tasks to iterate in a single balance run.
89 * part of the period that we allow rt tasks to run in us.
158 * [ The astute reader will observe that it is possible for two tasks on one
166 * the CPU assignment of blocked tasks isn't required to be valid.
818 /* Deadline tasks, even if single, need the tick */ in sched_can_stop_tick()
824 * If there are more than one RR tasks, we need the tick to effect the in sched_can_stop_tick()
836 * If there's no RR tasks, but FIFO tasks, we can skip the tick, no in sched_can_stop_tick()
837 * forced preemption between FIFO tasks. in sched_can_stop_tick()
845 * If there are no DL,RR/FIFO tasks, there in sched_can_stop_tick()
7342 detach_one_task_core(struct task_struct *p, struct rq *rq, struct list_head *tasks) (argument)
7352 attach_tasks_core(struct list_head *tasks, struct rq *rq) (argument)
7370 detach_one_task_core(struct task_struct *p, struct rq *rq, struct list_head *tasks) (argument)
7375 attach_tasks_core(struct list_head *tasks, struct rq *rq) (argument)
[all...]
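Taken together, the sched_can_stop_tick() comments above (lines 818-845) describe a small decision tree: deadline tasks always need the tick, two or more RR tasks need it for round-robin, FIFO-only runqueues can skip it, and otherwise it is only needed when more than one task is runnable. A hedged userspace restatement of that logic (the counter struct is an assumption modeled on the comments, not the kernel's actual rq layout):

    #include <stdbool.h>
    #include <stdio.h>

    /* Assumed per-runqueue counters, loosely modeled on the comments above. */
    struct rq_counts {
        unsigned dl_nr_running;  /* SCHED_DEADLINE tasks */
        unsigned rr_nr_running;  /* SCHED_RR tasks */
        unsigned rt_nr_running;  /* all realtime tasks (RR + FIFO) */
        unsigned nr_running;     /* all runnable tasks */
    };

    static bool can_stop_tick(const struct rq_counts *rq)
    {
        /* Deadline tasks, even if single, need the tick. */
        if (rq->dl_nr_running)
            return false;

        /* More than one RR task needs the tick to effect round-robin. */
        if (rq->rr_nr_running)
            return rq->rr_nr_running == 1;

        /* FIFO tasks never force preemption between each other. */
        if (rq->rt_nr_running - rq->rr_nr_running)
            return true;

        /* Otherwise the tick is only needed for involuntary preemption
         * between two or more remaining runnable tasks. */
        return rq->nr_running <= 1;
    }

    int main(void)
    {
        struct rq_counts two_rr = { .rr_nr_running = 2, .rt_nr_running = 2,
                                    .nr_running = 2 };
        printf("two RR tasks -> stop tick? %d\n", can_stop_tick(&two_rr)); /* 0 */
        return 0;
    }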
fair.c
63 * Targeted preemption latency for CPU-bound tasks:
92 * Minimal preemption granularity for CPU-bound tasks:
459 * both tasks until we find their ancestors who are siblings of common in find_matching_se()
720 * When there are too many tasks (sched_nr_latency) we have to stretch
798 * Tasks are initialized with full load to be seen as heavy tasks until in init_entity_runnable_average()
813 * With new tasks being created, their initial util_avgs are extrapolated
822 * To solve this problem, we also cap the util_avg of successive tasks to
863 * For !fair tasks do: in post_init_entity_util_avg()
1062 * Are we enqueueing a waiting task? (for current tasks in update_stats_enqueue()
1118 * calculated based on the tasks virtual
7880 struct list_head tasks; (member)
8145 struct list_head *tasks = &env->src_rq->cfs_tasks; in detach_tasks() (local)
8325 struct list_head *tasks = &env->tasks; in attach_tasks() (local)
[all...]
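The detach_tasks()/attach_tasks() pair at lines 8145 and 8325 is the core load-balancing pattern: candidate tasks are unlinked from the busy runqueue onto a private list, then re-linked on the destination. A simplified userspace sketch of that move-via-private-list pattern (the singly linked node type and fixed move count are illustrative assumptions, not the kernel's cfs_tasks handling):

    #include <stdio.h>

    struct node { int id; struct node *next; };

    /* Pop up to 'n' tasks off the source queue onto a private list, the way
     * detach_tasks() collects candidates from src_rq->cfs_tasks. */
    static struct node *detach_tasks(struct node **src, int n)
    {
        struct node *moved = NULL;
        while (*src && n--) {
            struct node *p = *src;
            *src = p->next;
            p->next = moved;
            moved = p;
        }
        return moved;
    }

    /* Re-link every detached task on the destination queue, as
     * attach_tasks() does with env->tasks. */
    static void attach_tasks(struct node *moved, struct node **dst)
    {
        while (moved) {
            struct node *p = moved;
            moved = moved->next;
            p->next = *dst;
            *dst = p;
        }
    }

    int main(void)
    {
        struct node c = { 3, NULL }, b = { 2, &c }, a = { 1, &b };
        struct node *busy = &a, *idle = NULL;

        attach_tasks(detach_tasks(&busy, 2), &idle);
        for (struct node *p = idle; p; p = p->next)
            printf("moved task %d\n", p->id);
        return 0;
    }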
sched.h
288 * tasks, but still be able to sleep. We need this on platforms that cannot
346 * To keep the bandwidth of -deadline tasks under control
499 * Controls whether tasks of this cgroup should be colocated with each
500 * other and tasks of other cgroups that have the same flag turned on.
654 * leaf cfs_rqs are those that hold tasks (lowest schedulable entity in
761 * an rb-tree, ordered by tasks' deadlines, with caching
776 * Utilization of the tasks "assigned" to this runqueue (including
777 * the tasks that are in runqueue and the tasks that executed on this
878 * than one runnable -deadline task (as it is below for RT tasks)
939 unsigned long tasks : BITS_PER_LONG - bits_per(SCHED_CAPACITY_SCALE); (member)
[all...]
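The bitfield at line 939 packs a task count into whatever bits of an unsigned long remain after reserving enough bits to hold SCHED_CAPACITY_SCALE. A small sketch of that packing trick (the struct and the hard-coded 11-bit width, i.e. bits_per(1024), are assumptions for illustration; unsigned long bit-fields are a GCC extension the kernel relies on):

    #include <limits.h>
    #include <stdio.h>

    #define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)
    /* bits_per(SCHED_CAPACITY_SCALE): 11 bits are needed to hold 1024. */
    #define CAPACITY_BITS 11

    /* Split one unsigned long between a capacity value and a task count,
     * in the spirit of the sched.h bitfield quoted above. */
    struct packed_stats {
        unsigned long capacity : CAPACITY_BITS;
        unsigned long tasks    : BITS_PER_LONG - CAPACITY_BITS;
    };

    int main(void)
    {
        struct packed_stats s = { .capacity = 1024, .tasks = 5 };
        printf("sizeof(s) = %zu bytes, capacity = %lu, tasks = %lu\n",
               sizeof(s), (unsigned long)s.capacity, (unsigned long)s.tasks);
        return 0;
    }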
/device/soc/rockchip/common/sdk_linux/include/linux/
sched.h
378 * struct util_est - Estimation utilization of FAIR tasks
389 * The enqueued attribute has a slightly different meaning for tasks and cpus:
395 * Only for tasks we track a moving average of the past instantaneous
568 * demand for tasks.
786 * push tasks around a CPU where each wakeup moves to the next one.
879 struct list_head tasks; (member)
990 * 'ptraced' is the list of tasks this task is using ptrace() on.
1267 * pagefault context (and for tasks being destroyed), so it can be read
1700 * tasks can access tsk->flags in readonly mode for example
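Per the comments at lines 378-395, struct util_est keeps, for each task, the utilization sample taken at enqueue time plus a moving average of past instantaneous utilization. A hedged sketch of such an integer moving average (the 1/4 sample weight mirrors the kernel's default shift of 2, but the helper below is illustrative, not the kernel's actual update path):

    #include <stdio.h>

    #define UTIL_EST_WEIGHT_SHIFT 2   /* a new sample contributes 1/4 */

    /* Assumed miniature counterpart of struct util_est. */
    struct util_est {
        unsigned int enqueued;  /* last utilization sample seen at enqueue */
        unsigned int ewma;      /* moving average of past samples */
    };

    /* Fold a fresh instantaneous-utilization sample into the average:
     * ewma += (sample - ewma) / 4, in integer arithmetic. */
    static void util_est_update(struct util_est *ue, unsigned int sample)
    {
        int diff = (int)sample - (int)ue->ewma;

        ue->enqueued = sample;
        ue->ewma = (unsigned int)((int)ue->ewma
                                  + diff / (1 << UTIL_EST_WEIGHT_SHIFT));
    }

    int main(void)
    {
        struct util_est ue = { 0, 0 };
        const unsigned int samples[] = { 400, 400, 100, 100 };

        for (size_t i = 0; i < sizeof(samples) / sizeof(samples[0]); i++) {
            util_est_update(&ue, samples[i]);
            printf("sample %u -> ewma %u\n", samples[i], ue.ewma);
        }
        return 0;
    }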
