=================================================
A Tour Through TREE_RCU's Expedited Grace Periods
=================================================

Introduction
============

This document describes RCU's expedited grace periods.
Unlike RCU's normal grace periods, which accept long latencies to attain
high efficiency and minimal disturbance, expedited grace periods accept
lower efficiency and significant disturbance to attain shorter latencies.

There are two flavors of RCU (RCU-preempt and RCU-sched), with an earlier
third RCU-bh flavor having been implemented in terms of the other two.
Each of the two implementations is covered in its own section.
Expedited Grace Period Design
=============================

The expedited RCU grace periods cannot be accused of being subtle,
given that they for all intents and purposes hammer every CPU that
has not yet provided a quiescent state for the current expedited
grace period.
The one saving grace is that the hammer has grown a bit smaller
over time:  The old call to ``try_stop_cpus()`` has been
replaced with a set of calls to ``smp_call_function_single()``,
each of which results in an IPI to the target CPU.
The corresponding handler function checks the CPU's state, motivating
a faster quiescent state where possible, and triggering a report
of that quiescent state.
As always for RCU, once everything has spent some time in a quiescent
state, the expedited grace period has completed.

The details of the ``smp_call_function_single()`` handler's
operation depend on the RCU flavor, as described in the following
sections.

RCU-preempt Expedited Grace Periods
===================================

``CONFIG_PREEMPT=y`` kernels implement RCU-preempt.
The overall flow of the handling of a given CPU by an RCU-preempt
expedited grace period is shown in the following diagram:

.. kernel-figure:: ExpRCUFlow.svg

The solid arrows denote direct action, for example, a function call.
The dotted arrows denote indirect action, for example, an IPI
or a state that is reached after some time.

If a given CPU is offline or idle, ``synchronize_rcu_expedited()``
will ignore it because idle and offline CPUs are already residing
in quiescent states.
Otherwise, the expedited grace period will use
``smp_call_function_single()`` to send the CPU an IPI, which
is handled by ``rcu_exp_handler()``.

However, because this is preemptible RCU, ``rcu_exp_handler()``
can check to see if the CPU is currently running in an RCU read-side
critical section.
If not, the handler can immediately report a quiescent state.
Otherwise, it sets flags so that the outermost ``rcu_read_unlock()``
invocation will provide the needed quiescent-state report.
This flag-setting avoids the previous forced preemption of all
CPUs that might have RCU read-side critical sections.
In addition, this flag-setting is done so as to avoid increasing
the overhead of the common-case fastpath through the scheduler.

Again because this is preemptible RCU, an RCU read-side critical section
can be preempted.
When that happens, RCU will enqueue the task, which will then continue
to block the current expedited grace period until it resumes and finds
its outermost ``rcu_read_unlock()``.
The CPU will report a quiescent state just after enqueuing the task because
the CPU is no longer blocking the grace period.
It is instead the preempted task doing the blocking.
The list of blocked tasks is managed by ``rcu_preempt_ctxt_queue()``,
which is called from ``rcu_preempt_note_context_switch()``, which
in turn is called from ``rcu_note_context_switch()``, which in
turn is called from the scheduler.


+-----------------------------------------------------------------------+
| **Quick Quiz**:                                                       |
+-----------------------------------------------------------------------+
| Why not just have the expedited grace period check the state of all   |
| the CPUs? After all, that would avoid all those real-time-unfriendly  |
| IPIs.                                                                 |
+-----------------------------------------------------------------------+
| **Answer**:                                                           |
+-----------------------------------------------------------------------+
| Because we want the RCU read-side critical sections to run fast,      |
| which means no memory barriers. Therefore, it is not possible to      |
| safely check the state from some other CPU. And even if it was        |
| possible to safely check the state, it would still be necessary to    |
| IPI the CPU to safely interact with the upcoming                      |
| ``rcu_read_unlock()`` invocation, which means that the remote state   |
| testing would not help the worst-case latency that real-time          |
| applications care about.                                              |
|                                                                       |
| One way to prevent your real-time application from getting hit with   |
| these IPIs is to build your kernel with ``CONFIG_NO_HZ_FULL=y``. RCU  |
| would then perceive the CPU running your application as being idle,   |
| and it would be able to safely detect that state without needing to   |
| IPI the CPU.                                                          |
+-----------------------------------------------------------------------+

Please note that this is just the overall flow: Additional complications
can arise due to races with CPUs going idle or offline, among other
things.

RCU-sched Expedited Grace Periods
---------------------------------

``CONFIG_PREEMPT=n`` kernels implement RCU-sched. The overall flow of
the handling of a given CPU by an RCU-sched expedited grace period is
shown in the following diagram:

.. kernel-figure:: ExpSchedFlow.svg

As with RCU-preempt, RCU-sched's ``synchronize_rcu_expedited()`` ignores
offline and idle CPUs, again because they are in remotely detectable
quiescent states. However, because the ``rcu_read_lock_sched()`` and
``rcu_read_unlock_sched()`` leave no trace of their invocation, in
general it is not possible to tell whether or not the current CPU is in
an RCU read-side critical section. The best that RCU-sched's
``rcu_exp_handler()`` can do is to check for idle, on the off-chance
that the CPU went idle while the IPI was in flight. If the CPU is idle,
then ``rcu_exp_handler()`` reports the quiescent state.

Otherwise, the handler forces a future context switch by setting the
NEED_RESCHED flag of the current task's thread_info and the CPU preempt
counter. At the time of the context switch, the CPU reports the
quiescent state. Should the CPU go offline first, it will report the
quiescent state at that time.

Expedited Grace Period and CPU Hotplug
--------------------------------------

The expedited nature of expedited grace periods requires a much tighter
interaction with CPU hotplug operations than is required for normal
grace periods. In addition, attempting to IPI offline CPUs will result
in splats, but failing to IPI online CPUs can result in too-short grace
periods. Neither option is acceptable in production kernels.

The interaction between expedited grace periods and CPU hotplug
operations is carried out at several levels:

#. The number of CPUs that have ever been online is tracked by the
   ``rcu_state`` structure's ``->ncpus`` field. The ``rcu_state``
   structure's ``->ncpus_snap`` field tracks the number of CPUs that
   have ever been online at the beginning of an RCU expedited grace
   period. Note that this number never decreases, at least in the
   absence of a time machine.
#. The identities of the CPUs that have ever been online are tracked by
   the ``rcu_node`` structure's ``->expmaskinitnext`` field. The
   ``rcu_node`` structure's ``->expmaskinit`` field tracks the
   identities of the CPUs that were online at least once at the
   beginning of the most recent RCU expedited grace period. The
   ``rcu_state`` structure's ``->ncpus`` and ``->ncpus_snap`` fields are
   used to detect when new CPUs have come online for the first time,
   that is, when the ``rcu_node`` structure's ``->expmaskinitnext``
   field has changed since the beginning of the last RCU expedited grace
   period, which triggers an update of each ``rcu_node`` structure's
   ``->expmaskinit`` field from its ``->expmaskinitnext`` field.
#. Each ``rcu_node`` structure's ``->expmaskinit`` field is used to
   initialize that structure's ``->expmask`` at the beginning of each
   RCU expedited grace period. This means that only those CPUs that have
   been online at least once will be considered for a given grace
   period.
#. Any CPU that goes offline will clear its bit in its leaf ``rcu_node``
   structure's ``->qsmaskinitnext`` field, so any CPU with that bit
   clear can safely be ignored. However, it is possible for a CPU coming
   online or going offline to have this bit set for some time while
   ``cpu_online`` returns ``false``.
#. For each non-idle CPU that RCU believes is currently online, the
   grace period invokes ``smp_call_function_single()``. If this
   succeeds, the CPU was fully online. Failure indicates that the CPU is
   in the process of coming online or going offline, in which case it is
   necessary to wait for a short time period and try again. The purpose
   of this wait (or series of waits, as the case may be) is to permit a
   concurrent CPU-hotplug operation to complete.
#. In the case of RCU-sched, one of the last acts of an outgoing CPU is
   to invoke ``rcu_report_dead()``, which reports a quiescent state for
   that CPU. However, this is likely paranoia-induced redundancy.

+-----------------------------------------------------------------------+
| **Quick Quiz**:                                                       |
+-----------------------------------------------------------------------+
| Why all the dancing around with multiple counters and masks tracking  |
| CPUs that were once online? Why not just have a single set of masks   |
| tracking the currently online CPUs and be done with it?               |
+-----------------------------------------------------------------------+
| **Answer**:                                                           |
+-----------------------------------------------------------------------+
| Maintaining a single set of masks tracking the online CPUs *sounds*   |
| easier, at least until you try working out all the race conditions    |
| between grace-period initialization and CPU-hotplug operations. For   |
| example, suppose initialization is progressing down the tree while a  |
| CPU-offline operation is progressing up the tree. This situation can  |
| result in bits set at the top of the tree that have no counterparts   |
| at the bottom of the tree. Those bits will never be cleared, which    |
| will result in grace-period hangs. In short, that way lies madness,   |
| to say nothing of a great many bugs, hangs, and deadlocks.            |
| In contrast, the current multi-mask multi-counter scheme ensures that |
| grace-period initialization will always see consistent masks up and   |
| down the tree, which brings significant simplifications over the      |
| single-mask method.                                                   |
|                                                                       |
| This is an instance of `deferring work in order to avoid              |
| synchronization <http://www.cs.columbia.edu/~library/TR-repository/re |
| ports/reports-1992/cucs-039-92.ps.gz>`__.                             |
| Lazily recording CPU-hotplug events at the beginning of the next      |
| grace period greatly simplifies maintenance of the CPU-tracking       |
| bitmasks in the ``rcu_node`` tree.                                    |
+-----------------------------------------------------------------------+

Expedited Grace Period Refinements
----------------------------------

Idle-CPU Checks
~~~~~~~~~~~~~~~

Each expedited grace period checks for idle CPUs when initially forming
the mask of CPUs to be IPIed and again just before IPIing a CPU (both
checks are carried out by ``sync_rcu_exp_select_cpus()``). If the CPU is
idle at any time between those two times, the CPU will not be IPIed.
Instead, the task pushing the grace period forward will include the idle
CPUs in the mask passed to ``rcu_report_exp_cpu_mult()``.

For RCU-sched, there is an additional check: If the IPI has interrupted
the idle loop, then ``rcu_exp_handler()`` invokes
``rcu_report_exp_rdp()`` to report the corresponding quiescent state.

For RCU-preempt, there is no specific check for idle in the IPI handler
(``rcu_exp_handler()``), but because RCU read-side critical sections are
not permitted within the idle loop, if ``rcu_exp_handler()`` sees that
the CPU is within an RCU read-side critical section, the CPU cannot
possibly be idle. Otherwise, ``rcu_exp_handler()`` invokes
``rcu_report_exp_rdp()`` to report the corresponding quiescent state,
regardless of whether or not that quiescent state was due to the CPU
being idle.

In summary, RCU expedited grace periods check for idle when building the
bitmask of CPUs that must be IPIed, just before sending each IPI, and
(either explicitly or implicitly) within the IPI handler.

Batching via Sequence Counter
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If each grace-period request were carried out separately, expedited grace
periods would have abysmal scalability and problematic high-load
characteristics. Because each grace-period operation can serve an
unlimited number of updates, it is important to *batch* requests, so
that a single expedited grace-period operation will cover all requests
in the corresponding batch.

This batching is controlled by a sequence counter named
``->expedited_sequence`` in the ``rcu_state`` structure. This counter
has an odd value when there is an expedited grace period in progress and
an even value otherwise, so that dividing the counter value by two gives
the number of completed grace periods. During any given update request,
the counter must transition from even to odd and then back to even, thus
indicating that a grace period has elapsed. Therefore, if the initial
value of the counter is ``s``, the updater must wait until the counter
reaches at least the value ``(s+3)&~0x1``. This counter is managed by
the following access functions:

#. ``rcu_exp_gp_seq_start()``, which marks the start of an expedited
   grace period.
#. ``rcu_exp_gp_seq_end()``, which marks the end of an expedited grace
   period.
#. ``rcu_exp_gp_seq_snap()``, which obtains a snapshot of the counter.
#. ``rcu_exp_gp_seq_done()``, which returns ``true`` if a full expedited
   grace period has elapsed since the corresponding call to
   ``rcu_exp_gp_seq_snap()``.

Again, only one request in a given batch need actually carry out a
grace-period operation, which means there must be an efficient way to
identify which of many concurrent requests will initiate the grace
period, and an efficient way for the remaining requests to wait for
that grace period to complete. However, that is the topic of the next
section.

Funnel Locking and Wait/Wakeup
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The natural way to sort out which of a batch of updaters will initiate
the expedited grace period is to use the ``rcu_node`` combining tree, as
implemented by the ``exp_funnel_lock()`` function. The first updater
corresponding to a given grace period arriving at a given ``rcu_node``
structure records its desired grace-period sequence number in the
``->exp_seq_rq`` field and moves up to the next level in the tree.
Otherwise, if the ``->exp_seq_rq`` field already contains the sequence
number for the desired grace period or some later one, the updater
blocks on one of four wait queues in the ``->exp_wq[]`` array, using the
second-from-bottom and third-from-bottom bits as an index. An
``->exp_lock`` field in the ``rcu_node`` structure synchronizes access
to these fields.

An empty ``rcu_node`` tree is shown in the following diagram, with the
white cells representing the ``->exp_seq_rq`` field and the red cells
representing the elements of the ``->exp_wq[]`` array.

.. kernel-figure:: Funnel0.svg

The next diagram shows the situation after the arrival of Task A and
Task B at the leftmost and rightmost leaf ``rcu_node`` structures,
respectively. The current value of the ``rcu_state`` structure's
``->expedited_sequence`` field is zero, so adding three and clearing the
bottom bit results in the value two, which both tasks record in the
``->exp_seq_rq`` field of their respective ``rcu_node`` structures:

.. kernel-figure:: Funnel1.svg

Each of Tasks A and B will move up to the root ``rcu_node`` structure.
Suppose that Task A wins, recording its desired grace-period sequence
number and resulting in the state shown below:

.. kernel-figure:: Funnel2.svg

Task A now advances to initiate a new grace period, while Task B moves
up to the root ``rcu_node`` structure, and, seeing that its desired
sequence number is already recorded, blocks on ``->exp_wq[1]``.

+-----------------------------------------------------------------------+
| **Quick Quiz**:                                                       |
+-----------------------------------------------------------------------+
| Why ``->exp_wq[1]``? Given that the value of these tasks' desired     |
| sequence number is two, shouldn't they instead block on               |
| ``->exp_wq[2]``?                                                      |
+-----------------------------------------------------------------------+
| **Answer**:                                                           |
+-----------------------------------------------------------------------+
| No.                                                                   |
| Recall that the bottom bit of the desired sequence number indicates   |
| whether or not a grace period is currently in progress. It is         |
| therefore necessary to shift the sequence number right one bit        |
| position to obtain the number of the grace period. This results in    |
| ``->exp_wq[1]``.                                                      |
+-----------------------------------------------------------------------+

If Tasks C and D also arrive at this point, they will compute the same
desired grace-period sequence number, and see that both leaf
``rcu_node`` structures already have that value recorded. They will
therefore block on their respective ``rcu_node`` structures'
``->exp_wq[1]`` fields, as shown below:

.. kernel-figure:: Funnel3.svg

Task A now acquires the ``rcu_state`` structure's ``->exp_mutex`` and
initiates the grace period, which increments ``->expedited_sequence``.
Therefore, if Tasks E and F arrive, they will compute a desired sequence
number of 4 and will record this value as shown below:

.. kernel-figure:: Funnel4.svg

Tasks E and F will propagate up the ``rcu_node`` combining tree, with
Task F blocking on the root ``rcu_node`` structure and Task E waiting
for Task A to finish so that it can start the next grace period. The
resulting state is as shown below:

.. kernel-figure:: Funnel5.svg

Once the grace period completes, Task A starts waking up the tasks
waiting for this grace period to complete, increments the
``->expedited_sequence``, acquires the ``->exp_wake_mutex``, and then
releases the ``->exp_mutex``. This results in the following state:

.. kernel-figure:: Funnel6.svg

Task E can then acquire ``->exp_mutex`` and increment
``->expedited_sequence`` to the value three. If new tasks G and H arrive
and move up the combining tree at the same time, the state will be as
follows:

.. kernel-figure:: Funnel7.svg

Note that three of the root ``rcu_node`` structure's waitqueues are now
occupied. However, at some point, Task A will wake up the tasks blocked
on the ``->exp_wq`` waitqueues, resulting in the following state:

.. kernel-figure:: Funnel8.svg

Execution will continue with Tasks E and H completing their grace
periods and carrying out their wakeups.

+-----------------------------------------------------------------------+
| **Quick Quiz**:                                                       |
+-----------------------------------------------------------------------+
| What happens if Task A takes so long to do its wakeups that Task E's  |
| grace period completes?                                               |
+-----------------------------------------------------------------------+
| **Answer**:                                                           |
+-----------------------------------------------------------------------+
| Then Task E will block on the ``->exp_wake_mutex``, which will also   |
| prevent it from releasing ``->exp_mutex``, which in turn will prevent |
| the next grace period from starting. This last is important in        |
| preventing overflow of the ``->exp_wq[]`` array.                      |
+-----------------------------------------------------------------------+

Use of Workqueues
~~~~~~~~~~~~~~~~~

In earlier implementations, the task requesting the expedited grace
period also drove it to completion. This straightforward approach had
the disadvantage of needing to account for POSIX signals sent to user
tasks, so more recent implementations use the Linux kernel's
`workqueues <https://www.kernel.org/doc/Documentation/core-api/workqueue.rst>`__.

The requesting task still does counter snapshotting and funnel-lock
processing, but the task reaching the top of the funnel lock does a
``schedule_work()`` (from ``_synchronize_rcu_expedited()``) so that a
workqueue kthread does the actual grace-period processing. Because
workqueue kthreads do not accept POSIX signals, grace-period-wait
processing need not allow for POSIX signals. In addition, this approach
allows wakeups for the previous expedited grace period to be overlapped
with processing for the next expedited grace period. Because there are
only four sets of waitqueues, it is necessary to ensure that the
previous grace period's wakeups complete before the next grace period's
wakeups start. This is handled by having the ``->exp_mutex`` guard
expedited grace-period processing and the ``->exp_wake_mutex`` guard
wakeups. The key point is that the ``->exp_mutex`` is not released until
the first wakeup is complete, which means that the ``->exp_wake_mutex``
has already been acquired at that point. This approach ensures that the
previous grace period's wakeups can be carried out while the current
grace period is in process, but that these wakeups will complete before
the next grace period starts. This means that only three waitqueues are
required, guaranteeing that the four that are provided are sufficient.

Stall Warnings
~~~~~~~~~~~~~~

Expediting grace periods does nothing to speed things up when RCU
readers take too long, and therefore expedited grace periods check for
stalls just as normal grace periods do.
+-----------------------------------------------------------------------+
| **Quick Quiz**:                                                       |
+-----------------------------------------------------------------------+
| But why not just let the normal grace-period machinery detect the     |
| stalls, given that a given reader must block both normal and          |
| expedited grace periods?                                              |
+-----------------------------------------------------------------------+
| **Answer**:                                                           |
+-----------------------------------------------------------------------+
| Because it is quite possible that at a given time there is no normal  |
| grace period in progress, in which case the normal grace period       |
| cannot emit a stall warning.                                          |
+-----------------------------------------------------------------------+

The ``synchronize_sched_expedited_wait()`` function loops waiting for
the expedited grace period to end, but with a timeout set to the current
RCU CPU stall-warning time. If this time is exceeded, any CPUs or
``rcu_node`` structures blocking the current grace period are printed.
Each stall warning results in another pass through the loop, but the
second and subsequent passes use longer stall times.

Mid-boot operation
~~~~~~~~~~~~~~~~~~

The use of workqueues has the advantage that the expedited grace-period
code need not worry about POSIX signals. Unfortunately, it has the
corresponding disadvantage that workqueues cannot be used until they are
initialized, which does not happen until some time after the scheduler
spawns the first task. Given that there are parts of the kernel that
really do want to execute grace periods during this mid-boot “dead
zone”, expedited grace periods must do something else during this time.

What they do is to fall back to the old practice of requiring that the
requesting task drive the expedited grace period, as was the case before
the use of workqueues. However, the requesting task is only required to
drive the grace period during the mid-boot dead zone. Before mid-boot, a
synchronous grace period is a no-op. Some time after mid-boot,
workqueues are used.

Non-expedited non-SRCU synchronous grace periods must also operate
normally during mid-boot. This is handled by causing non-expedited grace
periods to take the expedited code path during mid-boot.

The current code assumes that there are no POSIX signals during the
mid-boot dead zone. However, if an overwhelming need for POSIX signals
somehow arises, appropriate adjustments can be made to the expedited
stall-warning code. One such adjustment would reinstate the
pre-workqueue stall-warning checks, but only during the mid-boot dead
zone.

With this refinement, synchronous grace periods can now be used from
task context pretty much any time during the life of the kernel, aside
from a few points in the suspend, hibernate, and shutdown code paths.

Summary
~~~~~~~

Expedited grace periods use a sequence-number approach to promote
batching, so that a single grace-period operation can serve numerous
requests. A funnel lock is used to efficiently identify the one task out
of a concurrent group that will request the grace period. All members of
the group will block on waitqueues provided in the ``rcu_node``
structure. The actual grace-period processing is carried out by a
workqueue.

CPU-hotplug operations are noted lazily in order to prevent the need for
tight synchronization between expedited grace periods and CPU-hotplug
operations. The dyntick-idle counters are used to avoid sending IPIs to
idle CPUs, at least in the common case. RCU-preempt and RCU-sched use
different IPI handlers and different code to respond to the state
changes carried out by those handlers, but otherwise use common code.

Quiescent states are tracked using the ``rcu_node`` tree, and once all
necessary quiescent states have been reported, all tasks waiting on this
expedited grace period are awakened. A pair of mutexes is used to allow
one grace period's wakeups to proceed concurrently with the next grace
period's processing.

This combination of mechanisms allows expedited grace periods to run
reasonably efficiently. However, for non-time-critical tasks, normal
grace periods should be used instead because their longer duration
permits much higher degrees of batching, and thus much lower per-request
overheads.