=================================================
A Tour Through TREE_RCU's Expedited Grace Periods
=================================================

Introduction
============

This document describes RCU's expedited grace periods.
Unlike RCU's normal grace periods, which accept long latencies to attain
high efficiency and minimal disturbance, expedited grace periods accept
lower efficiency and significant disturbance to attain shorter latencies.

There are two flavors of RCU (RCU-preempt and RCU-sched), with an earlier
third RCU-bh flavor having been implemented in terms of the other two.
Each of the two implementations is covered in its own section.

Expedited Grace Period Design
=============================

The expedited RCU grace periods cannot be accused of being subtle,
given that they for all intents and purposes hammer every CPU that
has not yet provided a quiescent state for the current expedited
grace period.
The one saving grace is that the hammer has grown a bit smaller
over time: The old call to ``try_stop_cpus()`` has been
replaced with a set of calls to ``smp_call_function_single()``,
each of which results in an IPI to the target CPU.
The corresponding handler function checks the CPU's state, motivating
a faster quiescent state where possible, and triggering a report
of that quiescent state.
As always for RCU, once everything has spent some time in a quiescent
state, the expedited grace period has completed.

The details of the ``smp_call_function_single()`` handler's
operation depend on the RCU flavor, as described in the following
sections.

RCU-preempt Expedited Grace Periods
===================================

``CONFIG_PREEMPTION=y`` kernels implement RCU-preempt.
The overall flow of the handling of a given CPU by an RCU-preempt
expedited grace period is shown in the following diagram:

.. kernel-figure:: ExpRCUFlow.svg

The solid arrows denote direct action, for example, a function call.
The dotted arrows denote indirect action, for example, an IPI
or a state that is reached after some time.

If a given CPU is offline or idle, ``synchronize_rcu_expedited()``
will ignore it because idle and offline CPUs are already residing
in quiescent states.
Otherwise, the expedited grace period will use
``smp_call_function_single()`` to send the CPU an IPI, which
is handled by ``rcu_exp_handler()``.

However, because this is preemptible RCU, ``rcu_exp_handler()``
can check to see if the CPU is currently running in an RCU read-side
critical section.
If not, the handler can immediately report a quiescent state.
Otherwise, it sets flags so that the outermost ``rcu_read_unlock()``
invocation will provide the needed quiescent-state report.
This flag-setting avoids the previous forced preemption of all
CPUs that might have RCU read-side critical sections.
In addition, this flag-setting is done so as to avoid increasing
the overhead of the common-case fastpath through the scheduler.

Again because this is preemptible RCU, an RCU read-side critical section
can be preempted.
When that happens, RCU will enqueue the task, which will then continue to
block the current expedited grace period until it resumes and finds its
outermost ``rcu_read_unlock()``.
The CPU will report a quiescent state just after enqueuing the task because
the CPU is no longer blocking the grace period.
It is instead the preempted task doing the blocking.
The list of blocked tasks is managed by ``rcu_preempt_ctxt_queue()``,
which is called from ``rcu_preempt_note_context_switch()``, which
in turn is called from ``rcu_note_context_switch()``, which in
turn is called from the scheduler.

+-----------------------------------------------------------------------+
| **Quick Quiz**:                                                       |
+-----------------------------------------------------------------------+
| Why not just have the expedited grace period check the state of all   |
| the CPUs? After all, that would avoid all those real-time-unfriendly  |
| IPIs.                                                                 |
+-----------------------------------------------------------------------+
| **Answer**:                                                           |
+-----------------------------------------------------------------------+
| Because we want the RCU read-side critical sections to run fast,      |
| which means no memory barriers. Therefore, it is not possible to      |
| safely check the state from some other CPU. And even if it was        |
| possible to safely check the state, it would still be necessary to    |
| IPI the CPU to safely interact with the upcoming                      |
| ``rcu_read_unlock()`` invocation, which means that the remote state   |
| testing would not help the worst-case latency that real-time          |
| applications care about.                                              |
|                                                                       |
| One way to prevent your real-time application from getting hit with   |
| these IPIs is to build your kernel with ``CONFIG_NO_HZ_FULL=y``. RCU  |
| would then perceive the CPU running your application as being idle,   |
| and it would be able to safely detect that state without needing to   |
| IPI the CPU.                                                          |
+-----------------------------------------------------------------------+

Please note that this is just the overall flow: Additional complications
can arise due to races with CPUs going idle or offline, among other
things.

RCU-sched Expedited Grace Periods
---------------------------------

``CONFIG_PREEMPTION=n`` kernels implement RCU-sched. The overall flow of
the handling of a given CPU by an RCU-sched expedited grace period is
shown in the following diagram:

.. kernel-figure:: ExpSchedFlow.svg

As with RCU-preempt, RCU-sched's ``synchronize_rcu_expedited()`` ignores
offline and idle CPUs, again because they are in remotely detectable
quiescent states. However, because the ``rcu_read_lock_sched()`` and
``rcu_read_unlock_sched()`` leave no trace of their invocation, in
general it is not possible to tell whether or not the current CPU is in
an RCU read-side critical section. The best that RCU-sched's
``rcu_exp_handler()`` can do is to check for idle, on the off-chance
that the CPU went idle while the IPI was in flight. If the CPU is idle,
then ``rcu_exp_handler()`` reports the quiescent state.

Otherwise, the handler forces a future context switch by setting the
NEED_RESCHED flag in the current task's thread flags and the CPU's
preempt counter. At the time of the context switch, the CPU reports the
quiescent state. Should the CPU go offline first, it will report the
quiescent state at that time.

Expedited Grace Period and CPU Hotplug
--------------------------------------

The expedited nature of expedited grace periods requires a much tighter
interaction with CPU hotplug operations than is required for normal
grace periods. In addition, attempting to IPI offline CPUs will result
in splats, but failing to IPI online CPUs can result in too-short grace
periods. Neither option is acceptable in production kernels.

The interaction between expedited grace periods and CPU hotplug
operations is carried out at several levels:

#. The number of CPUs that have ever been online is tracked by the
   ``rcu_state`` structure's ``->ncpus`` field. The ``rcu_state``
   structure's ``->ncpus_snap`` field tracks the number of CPUs that
   have ever been online at the beginning of an RCU expedited grace
   period. Note that this number never decreases, at least in the
   absence of a time machine.
#. The identities of the CPUs that have ever been online are tracked by
   the ``rcu_node`` structure's ``->expmaskinitnext`` field. The
   ``rcu_node`` structure's ``->expmaskinit`` field tracks the
   identities of the CPUs that were online at least once at the
   beginning of the most recent RCU expedited grace period. The
   ``rcu_state`` structure's ``->ncpus`` and ``->ncpus_snap`` fields are
   used to detect when new CPUs have come online for the first time,
   that is, when the ``rcu_node`` structure's ``->expmaskinitnext``
   field has changed since the beginning of the last RCU expedited grace
   period, which triggers an update of each ``rcu_node`` structure's
   ``->expmaskinit`` field from its ``->expmaskinitnext`` field.
#. Each ``rcu_node`` structure's ``->expmaskinit`` field is used to
   initialize that structure's ``->expmask`` at the beginning of each
   RCU expedited grace period. This means that only those CPUs that have
   been online at least once will be considered for a given grace
   period.
#. Any CPU that goes offline will clear its bit in its leaf ``rcu_node``
   structure's ``->qsmaskinitnext`` field, so any CPU with that bit
   clear can safely be ignored. However, it is possible for a CPU coming
   online or going offline to have this bit set for some time while
   ``cpu_online`` returns ``false``.
#. For each non-idle CPU that RCU believes is currently online, the
   grace period invokes ``smp_call_function_single()``. If this
   succeeds, the CPU was fully online. Failure indicates that the CPU is
   in the process of coming online or going offline, in which case it is
   necessary to wait for a short time period and try again. The purpose
   of this wait (or series of waits, as the case may be) is to permit a
   concurrent CPU-hotplug operation to complete.
#. In the case of RCU-sched, one of the last acts of an outgoing CPU is
   to invoke ``rcu_report_dead()``, which reports a quiescent state for
   that CPU. However, this is likely paranoia-induced redundancy.

+-----------------------------------------------------------------------+
| **Quick Quiz**:                                                       |
+-----------------------------------------------------------------------+
| Why all the dancing around with multiple counters and masks tracking  |
| CPUs that were once online? Why not just have a single set of masks   |
| tracking the currently online CPUs and be done with it?               |
+-----------------------------------------------------------------------+
| **Answer**:                                                           |
+-----------------------------------------------------------------------+
| Maintaining a single set of masks tracking the online CPUs *sounds*   |
| easier, at least until you try working out all the race conditions    |
| between grace-period initialization and CPU-hotplug operations. For   |
| example, suppose initialization is progressing down the tree while a  |
| CPU-offline operation is progressing up the tree. This situation can  |
| result in bits set at the top of the tree that have no counterparts   |
| at the bottom of the tree. Those bits will never be cleared, which    |
| will result in grace-period hangs. In short, that way lies madness,   |
| to say nothing of a great many bugs, hangs, and deadlocks.            |
| In contrast, the current multi-mask multi-counter scheme ensures that |
| grace-period initialization will always see consistent masks up and   |
| down the tree, which brings significant simplifications over the      |
| single-mask method.                                                   |
|                                                                       |
| This is an instance of `deferring work in order to avoid              |
| synchronization <http://www.cs.columbia.edu/~library/TR-repository/re |
| ports/reports-1992/cucs-039-92.ps.gz>`__.                             |
| Lazily recording CPU-hotplug events at the beginning of the next      |
| grace period greatly simplifies maintenance of the CPU-tracking       |
| bitmasks in the ``rcu_node`` tree.                                    |
+-----------------------------------------------------------------------+

Expedited Grace Period Refinements
----------------------------------

Idle-CPU Checks
~~~~~~~~~~~~~~~

Each expedited grace period checks for idle CPUs when initially forming
the mask of CPUs to be IPIed and again just before IPIing a CPU (both
checks are carried out by ``sync_rcu_exp_select_cpus()``). If the CPU is
idle at any time between those two times, the CPU will not be IPIed.
Instead, the task pushing the grace period forward will include the idle
CPUs in the mask passed to ``rcu_report_exp_cpu_mult()``.

For RCU-sched, there is an additional check: If the IPI has interrupted
the idle loop, then ``rcu_exp_handler()`` invokes
``rcu_report_exp_rdp()`` to report the corresponding quiescent state.

For RCU-preempt, there is no specific check for idle in the IPI handler
(``rcu_exp_handler()``), but because RCU read-side critical sections are
not permitted within the idle loop, if ``rcu_exp_handler()`` sees that
the CPU is within an RCU read-side critical section, the CPU cannot
possibly be idle. Otherwise, ``rcu_exp_handler()`` invokes
``rcu_report_exp_rdp()`` to report the corresponding quiescent state,
regardless of whether or not that quiescent state was due to the CPU
being idle.

In summary, RCU expedited grace periods check for idle when building the
bitmask of CPUs that must be IPIed, just before sending each IPI, and
(either explicitly or implicitly) within the IPI handler.

Batching via Sequence Counter
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If each grace-period request was carried out separately, expedited grace
periods would have abysmal scalability and problematic high-load
characteristics. Because each grace-period operation can serve an
unlimited number of updates, it is important to *batch* requests, so
that a single expedited grace-period operation will cover all requests
in the corresponding batch.

This batching is controlled by a sequence counter named
``->expedited_sequence`` in the ``rcu_state`` structure. This counter
has an odd value when there is an expedited grace period in progress and
an even value otherwise, so that dividing the counter value by two gives
the number of completed grace periods. During any given update request,
the counter must transition from even to odd and then back to even, thus
indicating that a grace period has elapsed. Therefore, if the initial
value of the counter is ``s``, the updater must wait until the counter
reaches at least the value ``(s+3)&~0x1``. This counter is managed by
the following access functions:

#. ``rcu_exp_gp_seq_start()``, which marks the start of an expedited
   grace period.
#. ``rcu_exp_gp_seq_end()``, which marks the end of an expedited grace
   period.
#. ``rcu_exp_gp_seq_snap()``, which obtains a snapshot of the counter.
#. ``rcu_exp_gp_seq_done()``, which returns ``true`` if a full expedited
   grace period has elapsed since the corresponding call to
   ``rcu_exp_gp_seq_snap()``.

Again, only one request in a given batch need actually carry out a
grace-period operation, which means there must be an efficient way to
identify which of many concurrent requests will initiate the grace
period, and that there be an efficient way for the remaining requests to
wait for that grace period to complete. However, that is the topic of
the next section.

Funnel Locking and Wait/Wakeup
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The natural way to sort out which of a batch of updaters will initiate
the expedited grace period is to use the ``rcu_node`` combining tree, as
implemented by the ``exp_funnel_lock()`` function. The first updater
corresponding to a given grace period arriving at a given ``rcu_node``
structure records its desired grace-period sequence number in the
``->exp_seq_rq`` field and moves up to the next level in the tree.
Otherwise, if the ``->exp_seq_rq`` field already contains the sequence
number for the desired grace period or some later one, the updater
blocks on one of four wait queues in the ``->exp_wq[]`` array, using the
second-from-bottom and third-from-bottom bits as an index. An
``->exp_lock`` field in the ``rcu_node`` structure synchronizes access
to these fields.

An empty ``rcu_node`` tree is shown in the following diagram, with the
white cells representing the ``->exp_seq_rq`` field and the red cells
representing the elements of the ``->exp_wq[]`` array.

.. kernel-figure:: Funnel0.svg

The next diagram shows the situation after the arrival of Task A and
Task B at the leftmost and rightmost leaf ``rcu_node`` structures,
respectively. The current value of the ``rcu_state`` structure's
``->expedited_sequence`` field is zero, so adding three and clearing the
bottom bit results in the value two, which both tasks record in the
``->exp_seq_rq`` field of their respective ``rcu_node`` structures:

.. kernel-figure:: Funnel1.svg

Each of Tasks A and B will move up to the root ``rcu_node`` structure.
Suppose that Task A wins, recording its desired grace-period sequence
number and resulting in the state shown below:

.. kernel-figure:: Funnel2.svg

Task A now advances to initiate a new grace period, while Task B moves
up to the root ``rcu_node`` structure, and, seeing that its desired
sequence number is already recorded, blocks on ``->exp_wq[1]``.

+-----------------------------------------------------------------------+
| **Quick Quiz**:                                                       |
+-----------------------------------------------------------------------+
| Why ``->exp_wq[1]``? Given that the value of these tasks' desired     |
| sequence number is two, shouldn't they instead block on               |
| ``->exp_wq[2]``?                                                      |
+-----------------------------------------------------------------------+
| **Answer**:                                                           |
+-----------------------------------------------------------------------+
| No.                                                                   |
| Recall that the bottom bit of the desired sequence number indicates   |
| whether or not a grace period is currently in progress. It is         |
| therefore necessary to shift the sequence number right one bit        |
| position to obtain the number of the grace period. This results in    |
| ``->exp_wq[1]``.                                                      |
+-----------------------------------------------------------------------+

If Tasks C and D also arrive at this point, they will compute the same
desired grace-period sequence number, and see that both leaf
``rcu_node`` structures already have that value recorded. They will
therefore block on their respective ``rcu_node`` structures'
``->exp_wq[1]`` fields, as shown below:

.. kernel-figure:: Funnel3.svg

Task A now acquires the ``rcu_state`` structure's ``->exp_mutex`` and
initiates the grace period, which increments ``->expedited_sequence``.
Therefore, if Tasks E and F arrive, they will compute a desired sequence
number of 4 and will record this value as shown below:

.. kernel-figure:: Funnel4.svg

Tasks E and F will propagate up the ``rcu_node`` combining tree, with
Task F blocking on the root ``rcu_node`` structure and Task E waiting
for Task A to finish so that it can start the next grace period. The
resulting state is as shown below:

.. kernel-figure:: Funnel5.svg

Once the grace period completes, Task A starts waking up the tasks
waiting for this grace period to complete, increments the
``->expedited_sequence``, acquires the ``->exp_wake_mutex`` and then
releases the ``->exp_mutex``. This results in the following state:

.. kernel-figure:: Funnel6.svg

Task E can then acquire ``->exp_mutex`` and increment
``->expedited_sequence`` to the value three. If new tasks G and H arrive
and move up the combining tree at the same time, the state will be as
follows:

.. kernel-figure:: Funnel7.svg

Note that three of the root ``rcu_node`` structure's waitqueues are now
occupied. However, at some point, Task A will wake up the tasks blocked
on the ``->exp_wq`` waitqueues, resulting in the following state:

.. kernel-figure:: Funnel8.svg

Execution will continue with Tasks E and H completing their grace
periods and carrying out their wakeups.

+-----------------------------------------------------------------------+
| **Quick Quiz**:                                                       |
+-----------------------------------------------------------------------+
| What happens if Task A takes so long to do its wakeups that Task E's  |
| grace period completes?                                               |
+-----------------------------------------------------------------------+
| **Answer**:                                                           |
+-----------------------------------------------------------------------+
| Then Task E will block on the ``->exp_wake_mutex``, which will also   |
| prevent it from releasing ``->exp_mutex``, which in turn will prevent |
| the next grace period from starting. This last is important in        |
| preventing overflow of the ``->exp_wq[]`` array.                      |
+-----------------------------------------------------------------------+

Use of Workqueues
~~~~~~~~~~~~~~~~~

In earlier implementations, the task requesting the expedited grace
period also drove it to completion. This straightforward approach had
the disadvantage of needing to account for POSIX signals sent to user
tasks, so more recent implementations use the Linux kernel's
workqueues (see Documentation/core-api/workqueue.rst).

The requesting task still does counter snapshotting and funnel-lock
processing, but the task reaching the top of the funnel lock does a
``schedule_work()`` (from ``_synchronize_rcu_expedited()``) so that a
workqueue kthread does the actual grace-period processing. Because
workqueue kthreads do not accept POSIX signals, grace-period-wait
processing need not allow for POSIX signals. In addition, this approach
allows wakeups for the previous expedited grace period to be overlapped
with processing for the next expedited grace period. Because there are
only four sets of waitqueues, it is necessary to ensure that the
previous grace period's wakeups complete before the next grace period's
wakeups start. This is handled by having the ``->exp_mutex`` guard
expedited grace-period processing and the ``->exp_wake_mutex`` guard
wakeups.
The key point is that the ``->exp_mutex`` is not released until 42462306a36Sopenharmony_cithe first wakeup is complete, which means that the ``->exp_wake_mutex`` 42562306a36Sopenharmony_cihas already been acquired at that point. This approach ensures that the 42662306a36Sopenharmony_ciprevious grace period's wakeups can be carried out while the current 42762306a36Sopenharmony_cigrace period is in process, but that these wakeups will complete before 42862306a36Sopenharmony_cithe next grace period starts. This means that only three waitqueues are 42962306a36Sopenharmony_cirequired, guaranteeing that the four that are provided are sufficient. 43062306a36Sopenharmony_ci 43162306a36Sopenharmony_ciStall Warnings 43262306a36Sopenharmony_ci~~~~~~~~~~~~~~ 43362306a36Sopenharmony_ci 43462306a36Sopenharmony_ciExpediting grace periods does nothing to speed things up when RCU 43562306a36Sopenharmony_cireaders take too long, and therefore expedited grace periods check for 43662306a36Sopenharmony_cistalls just as normal grace periods do. 43762306a36Sopenharmony_ci 43862306a36Sopenharmony_ci+-----------------------------------------------------------------------+ 43962306a36Sopenharmony_ci| **Quick Quiz**: | 44062306a36Sopenharmony_ci+-----------------------------------------------------------------------+ 44162306a36Sopenharmony_ci| But why not just let the normal grace-period machinery detect the | 44262306a36Sopenharmony_ci| stalls, given that a given reader must block both normal and | 44362306a36Sopenharmony_ci| expedited grace periods? 
+-----------------------------------------------------------------------+
| **Answer**:                                                           |
+-----------------------------------------------------------------------+
| Because it is quite possible that at a given time there is no normal  |
| grace period in progress, in which case the normal grace period       |
| cannot emit a stall warning.                                          |
+-----------------------------------------------------------------------+

The ``synchronize_sched_expedited_wait()`` function loops waiting for
the expedited grace period to end, but with a timeout set to the current
RCU CPU stall-warning time. If this time is exceeded, any CPUs or
``rcu_node`` structures blocking the current grace period are printed.
Each stall warning results in another pass through the loop, but the
second and subsequent passes use longer stall times.

Mid-boot operation
~~~~~~~~~~~~~~~~~~

The use of workqueues has the advantage that the expedited grace-period
code need not worry about POSIX signals. Unfortunately, it has the
corresponding disadvantage that workqueues cannot be used until they are
initialized, which does not happen until some time after the scheduler
spawns the first task. Given that there are parts of the kernel that
really do want to execute grace periods during this mid-boot “dead
zone”, expedited grace periods must do something else during this time.

What they do is to fall back to the old practice of requiring that the
requesting task drive the expedited grace period, as was the case before
the use of workqueues. However, the requesting task is only required to
drive the grace period during the mid-boot dead zone. Before mid-boot, a
synchronous grace period is a no-op. Some time after mid-boot,
workqueues are used.

Non-expedited non-SRCU synchronous grace periods must also operate
normally during mid-boot. This is handled by causing non-expedited grace
periods to take the expedited code path during mid-boot.

The current code assumes that there are no POSIX signals during the
mid-boot dead zone. However, if an overwhelming need for POSIX signals
somehow arises, appropriate adjustments can be made to the expedited
stall-warning code. One such adjustment would reinstate the
pre-workqueue stall-warning checks, but only during the mid-boot dead
zone.

With this refinement, synchronous grace periods can now be used from
task context pretty much any time during the life of the kernel. That
is, aside from some points in the suspend, hibernate, or shutdown code
paths.
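The three boot phases described above can be summarized in a small
sketch. This is not the kernel's actual code: the ``boot_phase`` enum
and the ``exp_gp_strategy()`` helper are invented for this example (the
kernel instead consults its internal scheduler-state variable).

```c
#include <string.h>

/* Hypothetical boot phases, mirroring the prose above. */
enum boot_phase { EARLY_BOOT, MID_BOOT, FULLY_BOOTED };

/* How a synchronous expedited grace period is driven in each phase. */
static const char *exp_gp_strategy(enum boot_phase phase)
{
	switch (phase) {
	case EARLY_BOOT:
		/* Only one task exists, so there are no readers to wait for. */
		return "no-op";
	case MID_BOOT:
		/* Workqueues are not yet up: the requester drives the GP. */
		return "requester-driven";
	case FULLY_BOOTED:
	default:
		/* Normal operation: hand off to a workqueue kthread. */
		return "workqueue-driven";
	}
}
```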

Summary
~~~~~~~

Expedited grace periods use a sequence-number approach to promote
batching, so that a single grace-period operation can serve numerous
requests. A funnel lock is used to efficiently identify the one task out
of a concurrent group that will request the grace period. All members of
the group will block on waitqueues provided in the ``rcu_node``
structure. The actual grace-period processing is carried out by a
workqueue.

CPU-hotplug operations are noted lazily in order to prevent the need for
tight synchronization between expedited grace periods and CPU-hotplug
operations. The dyntick-idle counters are used to avoid sending IPIs to
idle CPUs, at least in the common case. RCU-preempt and RCU-sched use
different IPI handlers and different code to respond to the state
changes carried out by those handlers, but otherwise use common code.

Quiescent states are tracked using the ``rcu_node`` tree, and once all
necessary quiescent states have been reported, all tasks waiting on this
expedited grace period are awakened. A pair of mutexes are used to allow
one grace period's wakeups to proceed concurrently with the next grace
period's processing.

This combination of mechanisms allows expedited grace periods to run
reasonably efficiently.
However, for non-time-critical tasks, normal
grace periods should be used instead because their longer duration
permits much higher degrees of batching, and thus much lower per-request
overheads.
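The sequence-number batching mentioned in the summary can be illustrated
with a user-space sketch. This is a simplification, not kernel code: the
``exp_seq`` counter and helper names below are patterned loosely after
the kernel's ``rcu_seq_snap()``/``rcu_seq_done()`` helpers, reduced here
to a single in-flight bit.

```c
#include <assert.h>

/* Bottom bit of exp_seq is set while an expedited GP is in flight. */
static unsigned long exp_seq;

static void exp_gp_start(void) { exp_seq++; }	/* even -> odd */
static void exp_gp_end(void)   { exp_seq++; }	/* odd -> even */

/*
 * Snapshot: the value exp_seq must reach before a request made *now*
 * is known to be satisfied.  Adding 3 and clearing the bottom bit
 * ensures that a grace period already in flight does not count,
 * because it may have started before this request's updates.
 */
static unsigned long exp_seq_snap(void)
{
	return (exp_seq + 3) & ~0x1UL;
}

/* Has exp_seq reached the snapshot s?  (Wraparound-safe comparison.) */
static int exp_seq_done(unsigned long s)
{
	return (long)(exp_seq - s) >= 0;
}
```

Two tasks that take snapshots in the same state obtain the same target
value, so a single grace period satisfies both requests; this is the
batching that lets one grace-period operation serve numerous requests.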