			 ============================
			 LINUX KERNEL MEMORY BARRIERS
			 ============================

By: David Howells <dhowells@redhat.com>
    Paul E. McKenney <paulmck@linux.ibm.com>
    Will Deacon <will.deacon@arm.com>
    Peter Zijlstra <peterz@infradead.org>

==========
DISCLAIMER
==========

This document is not a specification; it is intentionally (for the sake of
brevity) and unintentionally (due to being human) incomplete. This document is
meant as a guide to using the various memory barriers provided by Linux, but
in case of any doubt (and there are many) please ask.  Some doubts may be
resolved by referring to the formal memory consistency model and related
documentation at tools/memory-model/.  Nevertheless, even this memory
model should be viewed as the collective opinion of its maintainers rather
than as an infallible oracle.

To repeat, this document is not a specification of what Linux expects from
hardware.

The purpose of this document is twofold:

 (1) to specify the minimum functionality that one can rely on for any
     particular barrier, and

 (2) to provide a guide as to how to use the barriers that are available.

Note that an architecture can provide more than the minimum requirement
for any particular barrier, but if the architecture provides less than
that, that architecture is incorrect.

Note also that it is possible that a barrier may be a no-op for an
architecture because the way that arch works renders an explicit barrier
unnecessary in that case.


========
CONTENTS
========

 (*) Abstract memory access model.

     - Device operations.
     - Guarantees.

 (*) What are memory barriers?

     - Varieties of memory barrier.
     - What may not be assumed about memory barriers?
     - Address-dependency barriers (historical).
     - Control dependencies.
     - SMP barrier pairing.
     - Examples of memory barrier sequences.
     - Read memory barriers vs load speculation.
     - Multicopy atomicity.

 (*) Explicit kernel barriers.

     - Compiler barrier.
     - CPU memory barriers.

 (*) Implicit kernel memory barriers.

     - Lock acquisition functions.
     - Interrupt disabling functions.
     - Sleep and wake-up functions.
     - Miscellaneous functions.

 (*) Inter-CPU acquiring barrier effects.

     - Acquires vs memory accesses.

 (*) Where are memory barriers needed?

     - Interprocessor interaction.
     - Atomic operations.
     - Accessing devices.
     - Interrupts.

 (*) Kernel I/O barrier effects.

 (*) Assumed minimum execution ordering model.

 (*) The effects of the cpu cache.

     - Cache coherency.
     - Cache coherency vs DMA.
     - Cache coherency vs MMIO.

 (*) The things CPUs get up to.

     - And then there's the Alpha.
     - Virtual Machine Guests.

 (*) Example uses.

     - Circular buffers.

 (*) References.

============================
ABSTRACT MEMORY ACCESS MODEL
============================

Consider the following abstract model of the system:

		            :                :
		            :                :
		            :                :
		+-------+   :   +--------+   :   +-------+
		|       |   :   |        |   :   |       |
		|       |   :   |        |   :   |       |
		| CPU 1 |<----->| Memory |<----->| CPU 2 |
		|       |   :   |        |   :   |       |
		|       |   :   |        |   :   |       |
		+-------+   :   +--------+   :   +-------+
		    ^       :       ^        :       ^
		    |       :       |        :       |
		    |       :       |        :       |
		    |       :       v        :       |
		    |       :   +--------+   :       |
		    |       :   |        |   :       |
		    |       :   |        |   :       |
		    +---------->| Device |<----------+
		            :   |        |   :
		            :   |        |   :
		            :   +--------+   :
		            :                :

Each CPU executes a program that generates memory access operations.  In the
abstract CPU, memory operation ordering is very relaxed, and a CPU may actually
perform the memory operations in any order it likes, provided program causality
appears to be maintained.  Similarly, the compiler may also arrange the
instructions it emits in any order it likes, provided it doesn't affect the
apparent operation of the program.

So in the above diagram, the effects of the memory operations performed by a
CPU are perceived by the rest of the system as the operations cross the
interface between the CPU and rest of the system (the dotted lines).


For example, consider the following sequence of events:

	CPU 1		CPU 2
	===============	===============
	{ A == 1; B == 2 }
	A = 3;		x = B;
	B = 4;		y = A;

The set of accesses as seen by the memory system in the middle can be arranged
in 24 different combinations:

	STORE A=3,	STORE B=4,	y=LOAD A->3,	x=LOAD B->4
	STORE A=3,	STORE B=4,	x=LOAD B->4,	y=LOAD A->3
	STORE A=3,	y=LOAD A->3,	STORE B=4,	x=LOAD B->4
	STORE A=3,	y=LOAD A->3,	x=LOAD B->2,	STORE B=4
	STORE A=3,	x=LOAD B->2,	STORE B=4,	y=LOAD A->3
	STORE A=3,	x=LOAD B->2,	y=LOAD A->3,	STORE B=4
	STORE B=4,	STORE A=3,	y=LOAD A->3,	x=LOAD B->4
	STORE B=4, ...
	...

and can thus result in four different combinations of values:

	x == 2, y == 1
	x == 2, y == 3
	x == 4, y == 1
	x == 4, y == 3


Furthermore, the stores committed by a CPU to the memory system may not be
perceived by the loads made by another CPU in the same order as the stores were
committed.


As a further example, consider this sequence of events:

	CPU 1		CPU 2
	===============	===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;		Q = P;
	P = &B;		D = *Q;

There is an obvious address dependency here, as the value loaded into D depends
on the address retrieved from P by CPU 2.  At the end of the sequence, any of
the following results are possible:

	(Q == &A) and (D == 1)
	(Q == &B) and (D == 2)
	(Q == &B) and (D == 4)

Note that CPU 2 will never try and load C into D because the CPU will load P
into Q before issuing the load of *Q.


DEVICE OPERATIONS
-----------------

Some devices present their control interfaces as collections of memory
locations, but the order in which the control registers are accessed is very
important.  For instance, imagine an ethernet card with a set of internal
registers that are accessed through an address port register (A) and a data
port register (D).  To read internal register 5, the following code might then
be used:

	*A = 5;
	x = *D;

but this might show up as either of the following two sequences:

	STORE *A = 5, x = LOAD *D
	x = LOAD *D, STORE *A = 5

the second of which will almost certainly result in a malfunction, since it set
the address _after_ attempting to read the register.


GUARANTEES
----------

There are some minimal guarantees that may be expected of a CPU:

 (*) On any given CPU, dependent memory accesses will be issued in order, with
     respect to itself.  This means that for:

	Q = READ_ONCE(P); D = READ_ONCE(*Q);

     the CPU will issue the following memory operations:

	Q = LOAD P, D = LOAD *Q

     and always in that order.  However, on DEC Alpha, READ_ONCE() also
     emits a memory-barrier instruction, so that a DEC Alpha CPU will
     instead issue the following memory operations:

	Q = LOAD P, MEMORY_BARRIER, D = LOAD *Q, MEMORY_BARRIER

     Whether on DEC Alpha or not, the READ_ONCE() also prevents compiler
     mischief.

 (*) Overlapping loads and stores within a particular CPU will appear to be
     ordered within that CPU.  This means that for:

	a = READ_ONCE(*X); WRITE_ONCE(*X, b);

     the CPU will only issue the following sequence of memory operations:

	a = LOAD *X, STORE *X = b

     And for:

	WRITE_ONCE(*X, c); d = READ_ONCE(*X);

     the CPU will only issue:

	STORE *X = c, d = LOAD *X

     (Loads and stores overlap if they are targeted at overlapping pieces of
     memory).

And there are a number of things that _must_ or _must_not_ be assumed:

 (*) It _must_not_ be assumed that the compiler will do what you want
     with memory references that are not protected by READ_ONCE() and
     WRITE_ONCE().  Without them, the compiler is within its rights to
     do all sorts of "creative" transformations, which are covered in
     the COMPILER BARRIER section.

 (*) It _must_not_ be assumed that independent loads and stores will be issued
     in the order given.  This means that for:

	X = *A; Y = *B; *D = Z;

     we may get any of the following sequences:

	X = LOAD *A,  Y = LOAD *B,  STORE *D = Z
	X = LOAD *A,  STORE *D = Z, Y = LOAD *B
	Y = LOAD *B,  X = LOAD *A,  STORE *D = Z
	Y = LOAD *B,  STORE *D = Z, X = LOAD *A
	STORE *D = Z, X = LOAD *A,  Y = LOAD *B
	STORE *D = Z, Y = LOAD *B,  X = LOAD *A

 (*) It _must_ be assumed that overlapping memory accesses may be merged or
     discarded.  This means that for:

	X = *A; Y = *(A + 4);

     we may get any one of the following sequences:

	X = LOAD *A; Y = LOAD *(A + 4);
	Y = LOAD *(A + 4); X = LOAD *A;
	{X, Y} = LOAD {*A, *(A + 4) };

     And for:

	*A = X; *(A + 4) = Y;

     we may get any of:

	STORE *A = X; STORE *(A + 4) = Y;
	STORE *(A + 4) = Y; STORE *A = X;
	STORE {*A, *(A + 4) } = {X, Y};

And there are anti-guarantees:

 (*) These guarantees do not apply to bitfields, because compilers often
     generate code to modify these using non-atomic read-modify-write
     sequences.  Do not attempt to use bitfields to synchronize parallel
     algorithms.

 (*) Even in cases where bitfields are protected by locks, all fields
     in a given bitfield must be protected by one lock.  If two fields
     in a given bitfield are protected by different locks, the compiler's
     non-atomic read-modify-write sequences can cause an update to one
     field to corrupt the value of an adjacent field.

 (*) These guarantees apply only to properly aligned and sized scalar
     variables.  "Properly sized" currently means variables that are
     the same size as "char", "short", "int" and "long".  "Properly
     aligned" means the natural alignment, thus no constraints for
     "char", two-byte alignment for "short", four-byte alignment for
     "int", and either four-byte or eight-byte alignment for "long",
     on 32-bit and 64-bit systems, respectively.  Note that these
     guarantees were introduced into the C11 standard, so beware when
     using older pre-C11 compilers (for example, gcc 4.6).  The portion
     of the standard containing this guarantee is Section 3.14, which
     defines "memory location" as follows:

     	memory location
		either an object of scalar type, or a maximal sequence
		of adjacent bit-fields all having nonzero width

		NOTE 1: Two threads of execution can update and access
		separate memory locations without interfering with
		each other.

		NOTE 2: A bit-field and an adjacent non-bit-field member
		are in separate memory locations. The same applies
		to two bit-fields, if one is declared inside a nested
		structure declaration and the other is not, or if the two
		are separated by a zero-length bit-field declaration,
		or if they are separated by a non-bit-field member
		declaration. It is not safe to concurrently update two
		bit-fields in the same structure if all members declared
		between them are also bit-fields, no matter what the
		sizes of those intervening bit-fields happen to be.


=========================
WHAT ARE MEMORY BARRIERS?
=========================

As can be seen above, independent memory operations are effectively performed
in random order, but this can be a problem for CPU-CPU interaction and for I/O.
What is required is some way of intervening to instruct the compiler and the
CPU to restrict the order.

Memory barriers are such interventions.  They impose a perceived partial
ordering over the memory operations on either side of the barrier.

Such enforcement is important because the CPUs and other devices in a system
can use a variety of tricks to improve performance, including reordering,
deferral and combination of memory operations; speculative loads; speculative
branch prediction and various types of caching.  Memory barriers are used to
override or suppress these tricks, allowing the code to sanely control the
interaction of multiple CPUs and/or devices.


VARIETIES OF MEMORY BARRIER
---------------------------

Memory barriers come in four basic varieties:

 (1) Write (or store) memory barriers.

     A write memory barrier gives a guarantee that all the STORE operations
     specified before the barrier will appear to happen before all the STORE
     operations specified after the barrier with respect to the other
     components of the system.

     A write barrier is a partial ordering on stores only; it is not required
     to have any effect on loads.

     A CPU can be viewed as committing a sequence of store operations to the
     memory system as time progresses.  All stores _before_ a write barrier
     will occur _before_ all the stores after the write barrier.

     [!] Note that write barriers should normally be paired with read or
     address-dependency barriers; see the "SMP barrier pairing" subsection.


 (2) Address-dependency barriers (historical).

     An address-dependency barrier is a weaker form of read barrier.  In the
     case where two loads are performed such that the second depends on the
     result of the first (eg: the first load retrieves the address to which
     the second load will be directed), an address-dependency barrier would
     be required to make sure that the target of the second load is updated
     after the address obtained by the first load is accessed.

     An address-dependency barrier is a partial ordering on interdependent
     loads only; it is not required to have any effect on stores, independent
     loads or overlapping loads.

     As mentioned in (1), the other CPUs in the system can be viewed as
     committing sequences of stores to the memory system that the CPU being
     considered can then perceive.  An address-dependency barrier issued by
     the CPU under consideration guarantees that for any load preceding it,
     if that load touches one of a sequence of stores from another CPU, then
     by the time the barrier completes, the effects of all the stores prior to
     that touched by the load will be perceptible to any loads issued after
     the address-dependency barrier.

     See the "Examples of memory barrier sequences" subsection for diagrams
     showing the ordering constraints.

     [!] Note that the first load really has to have an _address_ dependency and
     not a control dependency.  If the address for the second load is dependent
     on the first load, but the dependency is through a conditional rather than
     actually loading the address itself, then it's a _control_ dependency and
     a full read barrier or better is required.  See the "Control dependencies"
     subsection for more information.

     [!] Note that address-dependency barriers should normally be paired with
     write barriers; see the "SMP barrier pairing" subsection.

     [!] Kernel release v5.9 removed kernel APIs for explicit address-
     dependency barriers.  Nowadays, APIs for marking loads from shared
     variables such as READ_ONCE() and rcu_dereference() provide implicit
     address-dependency barriers.

 (3) Read (or load) memory barriers.

     A read barrier is an address-dependency barrier plus a guarantee that all
     the LOAD operations specified before the barrier will appear to happen
     before all the LOAD operations specified after the barrier with respect to
     the other components of the system.

     A read barrier is a partial ordering on loads only; it is not required to
     have any effect on stores.

     Read memory barriers imply address-dependency barriers, and so can
     substitute for them.

     [!] Note that read barriers should normally be paired with write barriers;
     see the "SMP barrier pairing" subsection.


 (4) General memory barriers.

     A general memory barrier gives a guarantee that all the LOAD and STORE
     operations specified before the barrier will appear to happen before all
     the LOAD and STORE operations specified after the barrier with respect to
     the other components of the system.

     A general memory barrier is a partial ordering over both loads and stores.

     General memory barriers imply both read and write memory barriers, and so
     can substitute for either.


And a couple of implicit varieties:

 (5) ACQUIRE operations.

     This acts as a one-way permeable barrier.  It guarantees that all memory
     operations after the ACQUIRE operation will appear to happen after the
     ACQUIRE operation with respect to the other components of the system.
     ACQUIRE operations include LOCK operations and both smp_load_acquire()
     and smp_cond_load_acquire() operations.

     Memory operations that occur before an ACQUIRE operation may appear to
     happen after it completes.

     An ACQUIRE operation should almost always be paired with a RELEASE
     operation.


 (6) RELEASE operations.

     This also acts as a one-way permeable barrier.  It guarantees that all
     memory operations before the RELEASE operation will appear to happen
     before the RELEASE operation with respect to the other components of the
     system. RELEASE operations include UNLOCK operations and
     smp_store_release() operations.

     Memory operations that occur after a RELEASE operation may appear to
     happen before it completes.

     The use of ACQUIRE and RELEASE operations generally precludes the need
     for other sorts of memory barrier.  In addition, a RELEASE+ACQUIRE pair is
     -not- guaranteed to act as a full memory barrier.  However, after an
     ACQUIRE on a given variable, all memory accesses preceding any prior
     RELEASE on that same variable are guaranteed to be visible.  In other
     words, within a given variable's critical section, all accesses of all
     previous critical sections for that variable are guaranteed to have
     completed.

     This means that ACQUIRE acts as a minimal "acquire" operation and
     RELEASE acts as a minimal "release" operation.

50862306a36Sopenharmony_ciA subset of the atomic operations described in atomic_t.txt have ACQUIRE and
50962306a36Sopenharmony_ciRELEASE variants in addition to fully-ordered and relaxed (no barrier
51062306a36Sopenharmony_cisemantics) definitions.  For compound atomics performing both a load and a
51162306a36Sopenharmony_cistore, ACQUIRE semantics apply only to the load and RELEASE semantics apply
51262306a36Sopenharmony_cionly to the store portion of the operation.

Memory barriers are only required where there's a possibility of interaction
between two CPUs or between a CPU and a device.  If it can be guaranteed that
there won't be any such interaction in any particular piece of code, then
memory barriers are unnecessary in that piece of code.


Note that these are the _minimum_ guarantees.  Different architectures may give
more substantial guarantees, but they may _not_ be relied upon outside of arch
specific code.


WHAT MAY NOT BE ASSUMED ABOUT MEMORY BARRIERS?
----------------------------------------------

There are certain things that the Linux kernel memory barriers do not guarantee:

 (*) There is no guarantee that any of the memory accesses specified before a
     memory barrier will be _complete_ by the completion of a memory barrier
     instruction; the barrier can be considered to draw a line in that CPU's
     access queue that accesses of the appropriate type may not cross.

 (*) There is no guarantee that issuing a memory barrier on one CPU will have
     any direct effect on another CPU or any other hardware in the system.  The
     indirect effect will be the order in which the second CPU sees the effects
     of the first CPU's accesses occur, but see the next point:

 (*) There is no guarantee that a CPU will see the correct order of effects
     from a second CPU's accesses, even _if_ the second CPU uses a memory
     barrier, unless the first CPU _also_ uses a matching memory barrier (see
     the subsection on "SMP Barrier Pairing").

 (*) There is no guarantee that some intervening piece of off-the-CPU
     hardware[*] will not reorder the memory accesses.  CPU cache coherency
     mechanisms should propagate the indirect effects of a memory barrier
     between CPUs, but might not do so in order.

	[*] For information on bus mastering DMA and coherency please read:

	    Documentation/driver-api/pci/pci.rst
	    Documentation/core-api/dma-api-howto.rst
	    Documentation/core-api/dma-api.rst


ADDRESS-DEPENDENCY BARRIERS (HISTORICAL)
----------------------------------------

As of v4.15 of the Linux kernel, an smp_mb() was added to READ_ONCE() for
DEC Alpha, which means that about the only people who need to pay attention
to this section are those working on DEC Alpha architecture-specific code
and those working on READ_ONCE() itself.  For those who need it, and for
those who are interested in the history, here is the story of
address-dependency barriers.

[!] While address dependencies are observed in both load-to-load and
load-to-store relations, address-dependency barriers are not necessary
for load-to-store situations.

The requirement of address-dependency barriers is a little subtle, and
it's not always obvious that they're needed.  To illustrate, consider the
following sequence of events:

	CPU 1		      CPU 2
	===============	      ===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;
	<write barrier>
	WRITE_ONCE(P, &B);
			      Q = READ_ONCE_OLD(P);
			      D = *Q;

[!] READ_ONCE_OLD() corresponds to the READ_ONCE() of pre-4.15 kernels,
which did not imply an address-dependency barrier.

There's a clear address dependency here, and it would seem that by the end of
the sequence, Q must be either &A or &B, and that:

	(Q == &A) implies (D == 1)
	(Q == &B) implies (D == 4)

But!  CPU 2's perception of P may be updated _before_ its perception of B, thus
leading to the following situation:

	(Q == &B) and (D == 2) ????

While this may seem like a failure of coherency or causality maintenance, it
isn't, and this behaviour can be observed on certain real CPUs (such as the DEC
Alpha).

To deal with this, READ_ONCE() provides an implicit address-dependency barrier
since kernel release v4.15:

	CPU 1		      CPU 2
	===============	      ===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;
	<write barrier>
	WRITE_ONCE(P, &B);
			      Q = READ_ONCE(P);
			      <implicit address-dependency barrier>
			      D = *Q;

This enforces the occurrence of one of the two implications, and prevents the
third possibility from arising.


[!] Note that this extremely counterintuitive situation arises most easily on
machines with split caches, so that, for example, one cache bank processes
even-numbered cache lines and the other bank processes odd-numbered cache
lines.  The pointer P might be stored in an odd-numbered cache line, and the
variable B might be stored in an even-numbered cache line.  Then, if the
even-numbered bank of the reading CPU's cache is extremely busy while the
odd-numbered bank is idle, one can see the new value of the pointer P (&B),
but the old value of the variable B (2).


An address-dependency barrier is not required to order dependent writes
because the CPUs that the Linux kernel supports don't do writes until they
are certain (1) that the write will actually happen, (2) of the location of
the write, and (3) of the value to be written.
But please carefully read the "CONTROL DEPENDENCIES" section and the
Documentation/RCU/rcu_dereference.rst file:  The compiler can and does break
dependencies in a great many highly creative ways.

	CPU 1		      CPU 2
	===============	      ===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;
	<write barrier>
	WRITE_ONCE(P, &B);
			      Q = READ_ONCE_OLD(P);
			      WRITE_ONCE(*Q, 5);

Therefore, no address-dependency barrier is required to order the read into
Q with the store into *Q.  In other words, this outcome is prohibited,
even without the implicit address-dependency barrier of modern READ_ONCE():

	(Q == &B) && (B == 4)

Please note that this pattern should be rare.  After all, the whole point
of dependency ordering is to -prevent- writes to the data structure, along
with the expensive cache misses associated with those writes.  This pattern
can be used to record rare error conditions and the like, and the CPUs'
naturally occurring ordering prevents such records from being lost.


Note well that the ordering provided by an address dependency is local to
the CPU containing it.  See the section on "Multicopy atomicity" for
more information.


The address-dependency barrier is very important to the RCU system,
for example.  See rcu_assign_pointer() and rcu_dereference() in
include/linux/rcupdate.h.  This permits the current target of an RCU'd
pointer to be replaced with a new modified target, without the replacement
target appearing to be incompletely initialised.

See also the subsection on "Cache Coherency" for a more thorough example.


CONTROL DEPENDENCIES
--------------------

Control dependencies can be a bit tricky because current compilers do
not understand them.  The purpose of this section is to help you prevent
the compiler's ignorance from breaking your code.

A load-load control dependency requires a full read memory barrier, not
simply an (implicit) address-dependency barrier to make it work correctly.
Consider the following bit of code:

	q = READ_ONCE(a);
	<implicit address-dependency barrier>
	if (q) {
		/* BUG: No address dependency!!! */
		p = READ_ONCE(b);
	}

This will not have the desired effect because there is no actual address
dependency, but rather a control dependency that the CPU may short-circuit
by attempting to predict the outcome in advance, so that other CPUs see
the load from b as having happened before the load from a.  In such a case
what's actually required is:

	q = READ_ONCE(a);
	if (q) {
		<read barrier>
		p = READ_ONCE(b);
	}

However, stores are not speculated.  This means that ordering -is- provided
for load-store control dependencies, as in the following example:

	q = READ_ONCE(a);
	if (q) {
		WRITE_ONCE(b, 1);
	}

Control dependencies pair normally with other types of barriers.
That said, please note that neither READ_ONCE() nor WRITE_ONCE()
is optional!  Without the READ_ONCE(), the compiler might combine the
load from 'a' with other loads from 'a'.  Without the WRITE_ONCE(),
the compiler might combine the store to 'b' with other stores to 'b'.
Either can result in highly counterintuitive effects on ordering.

Worse yet, if the compiler is able to prove (say) that the value of
variable 'a' is always non-zero, it would be well within its rights
to optimize the original example by eliminating the "if" statement
as follows:

	q = a;
	b = 1;  /* BUG: Compiler and CPU can both reorder!!! */

So don't leave out the READ_ONCE().

It is tempting to try to enforce ordering on identical stores on both
branches of the "if" statement as follows:

	q = READ_ONCE(a);
	if (q) {
		barrier();
		WRITE_ONCE(b, 1);
		do_something();
	} else {
		barrier();
		WRITE_ONCE(b, 1);
		do_something_else();
	}

Unfortunately, current compilers will transform this as follows at high
optimization levels:

	q = READ_ONCE(a);
	barrier();
	WRITE_ONCE(b, 1);  /* BUG: No ordering vs. load from a!!! */
	if (q) {
		/* WRITE_ONCE(b, 1); -- moved up, BUG!!! */
		do_something();
	} else {
		/* WRITE_ONCE(b, 1); -- moved up, BUG!!! */
		do_something_else();
	}

Now there is no conditional between the load from 'a' and the store to
'b', which means that the CPU is within its rights to reorder them:
The conditional is absolutely required, and must be present in the
assembly code even after all compiler optimizations have been applied.
Therefore, if you need ordering in this example, you need explicit
memory barriers, for example, smp_store_release():

	q = READ_ONCE(a);
	if (q) {
		smp_store_release(&b, 1);
		do_something();
	} else {
		smp_store_release(&b, 1);
		do_something_else();
	}

In contrast, without explicit memory barriers, two-legged-if control
ordering is guaranteed only when the stores differ, for example:

	q = READ_ONCE(a);
	if (q) {
		WRITE_ONCE(b, 1);
		do_something();
	} else {
		WRITE_ONCE(b, 2);
		do_something_else();
	}

The initial READ_ONCE() is still required to prevent the compiler from
proving the value of 'a'.

In addition, you need to be careful what you do with the local variable 'q',
otherwise the compiler might be able to guess the value and again remove
the needed conditional.  For example:

	q = READ_ONCE(a);
	if (q % MAX) {
		WRITE_ONCE(b, 1);
		do_something();
	} else {
		WRITE_ONCE(b, 2);
		do_something_else();
	}

If MAX is defined to be 1, then the compiler knows that (q % MAX) is
equal to zero, in which case the compiler is within its rights to
transform the above code into the following:

	q = READ_ONCE(a);
	WRITE_ONCE(b, 2);
	do_something_else();

Given this transformation, the CPU is not required to respect the ordering
between the load from variable 'a' and the store to variable 'b'.  It is
tempting to add a barrier(), but this does not help.  The conditional
is gone, and the barrier won't bring it back.  Therefore, if you are
relying on this ordering, you should make sure that MAX is greater than
one, perhaps as follows:

	q = READ_ONCE(a);
	BUILD_BUG_ON(MAX <= 1); /* Order load from a with store to b. */
	if (q % MAX) {
		WRITE_ONCE(b, 1);
		do_something();
	} else {
		WRITE_ONCE(b, 2);
		do_something_else();
	}

Please note once again that the stores to 'b' differ.  If they were
identical, as noted earlier, the compiler could pull this store outside
of the 'if' statement.

You must also be careful not to rely too much on boolean short-circuit
evaluation.  Consider this example:

	q = READ_ONCE(a);
	if (q || 1 > 0)
		WRITE_ONCE(b, 1);

Because the first condition cannot fault and the second condition is
always true, the compiler can transform this example as follows,
defeating the control dependency:

	q = READ_ONCE(a);
	WRITE_ONCE(b, 1);

This example underscores the need to ensure that the compiler cannot
out-guess your code.  More generally, although READ_ONCE() does force
the compiler to actually emit code for a given load, it does not force
the compiler to use the results.

In addition, control dependencies apply only to the then-clause and
else-clause of the if-statement in question.  In particular, they do
not necessarily apply to code following the if-statement:

	q = READ_ONCE(a);
	if (q) {
		WRITE_ONCE(b, 1);
	} else {
		WRITE_ONCE(b, 2);
	}
	WRITE_ONCE(c, 1);  /* BUG: No ordering against the read from 'a'. */

It is tempting to argue that there in fact is ordering because the
compiler cannot reorder volatile accesses and also cannot reorder
the writes to 'b' with the condition.  Unfortunately for this line
of reasoning, the compiler might compile the two writes to 'b' as
conditional-move instructions, as in this fanciful pseudo-assembly
language:

	ld r1,a
	cmp r1,$0
	cmov,ne r4,$1
	cmov,eq r4,$2
	st r4,b
	st $1,c

A weakly ordered CPU would have no dependency of any sort between the load
from 'a' and the store to 'c'.  The control dependencies would extend
only to the pair of cmov instructions and the store depending on them.
In short, control dependencies apply only to the stores in the then-clause
and else-clause of the if-statement in question (including functions
invoked by those two clauses), not to code following that if-statement.


Note well that the ordering provided by a control dependency is local
to the CPU containing it.  See the section on "Multicopy atomicity"
for more information.


In summary:

  (*) Control dependencies can order prior loads against later stores.
      However, they do -not- guarantee any other sort of ordering:
      Not prior loads against later loads, nor prior stores against
      later anything.  If you need these other forms of ordering,
      use smp_rmb(), smp_wmb(), or, in the case of prior stores and
      later loads, smp_mb().

  (*) If both legs of the "if" statement begin with identical stores to
      the same variable, then those stores must be ordered, either by
      preceding both of them with smp_mb() or by using smp_store_release()
      to carry out the stores.  Please note that it is -not- sufficient
      to use barrier() at the beginning of each leg of the "if" statement
      because, as shown by the example above, optimizing compilers can
      destroy the control dependency while respecting the letter of the
      barrier() law.

  (*) Control dependencies require at least one run-time conditional
      between the prior load and the subsequent store, and this
      conditional must involve the prior load.  If the compiler is able
      to optimize the conditional away, it will have also optimized
      away the ordering.  Careful use of READ_ONCE() and WRITE_ONCE()
      can help to preserve the needed conditional.

  (*) Control dependencies require that the compiler avoid reordering the
      dependency into nonexistence.  Careful use of READ_ONCE() or
      atomic{,64}_read() can help to preserve your control dependency.
      Please see the COMPILER BARRIER section for more information.

  (*) Control dependencies apply only to the then-clause and else-clause
      of the if-statement containing the control dependency, including
      any functions that these two clauses call.  Control dependencies
      do -not- apply to code following the if-statement containing the
      control dependency.

  (*) Control dependencies pair normally with other types of barriers.

  (*) Control dependencies do -not- provide multicopy atomicity.  If you
      need all the CPUs to see a given store at the same time, use smp_mb().

  (*) Compilers do not understand control dependencies.  It is therefore
      your job to ensure that they do not break your code.


SMP BARRIER PAIRING
-------------------

When dealing with CPU-CPU interactions, certain types of memory barrier should
always be paired.  A lack of appropriate pairing is almost certainly an error.

General barriers pair with each other, though they also pair with most
other types of barriers, albeit without multicopy atomicity.  An acquire
barrier pairs with a release barrier, but both may also pair with other
barriers, including of course general barriers.  A write barrier pairs
with an address-dependency barrier, a control dependency, an acquire barrier,
a release barrier, a read barrier, or a general barrier.  Similarly a
read barrier, control dependency, or an address-dependency barrier pairs
with a write barrier, an acquire barrier, a release barrier, or a
general barrier:

	CPU 1		      CPU 2
	===============	      ===============
	WRITE_ONCE(a, 1);
	<write barrier>
	WRITE_ONCE(b, 2);     x = READ_ONCE(b);
			      <read barrier>
			      y = READ_ONCE(a);

Or:

	CPU 1		      CPU 2
	===============	      ===============================
	a = 1;
	<write barrier>
	WRITE_ONCE(b, &a);    x = READ_ONCE(b);
			      <implicit address-dependency barrier>
			      y = *x;

Or even:

	CPU 1		      CPU 2
	===============	      ===============================
	r1 = READ_ONCE(y);
	<general barrier>
	WRITE_ONCE(x, 1);     if (r2 = READ_ONCE(x)) {
			         <implicit control dependency>
			         WRITE_ONCE(y, 1);
			      }

	assert(r1 == 0 || r2 == 0);

Basically, the read barrier always has to be there, even though it can be of
the "weaker" type.

[!] Note that the stores before the write barrier would normally be expected to
match the loads after the read barrier or the address-dependency barrier, and
vice versa:

	CPU 1                               CPU 2
	===================                 ===================
	WRITE_ONCE(a, 1);    }----   --->{  v = READ_ONCE(c);
	WRITE_ONCE(b, 2);    }    \ /    {  w = READ_ONCE(d);
	<write barrier>            \        <read barrier>
	WRITE_ONCE(c, 3);    }    / \    {  x = READ_ONCE(a);
	WRITE_ONCE(d, 4);    }----   --->{  y = READ_ONCE(b);


EXAMPLES OF MEMORY BARRIER SEQUENCES
------------------------------------

Firstly, write barriers act as partial orderings on store operations.
Consider the following sequence of events:

	CPU 1
	=======================
	STORE A = 1
	STORE B = 2
	STORE C = 3
	<write barrier>
	STORE D = 4
	STORE E = 5

This sequence of events is committed to the memory coherence system in an order
that the rest of the system might perceive as the unordered set of { STORE A,
STORE B, STORE C } all occurring before the unordered set of { STORE D, STORE E
}:

	+-------+       :      :
	|       |       +------+
	|       |------>| C=3  |     }     /\
	|       |  :    +------+     }-----  \  -----> Events perceptible to
	|       |  :    | A=1  |     }        \/       the rest of the system
	|       |  :    +------+     }
	| CPU 1 |  :    | B=2  |     }
	|       |       +------+     }
	|       |   wwwwwwwwwwwwwwww }   <--- At this point the write barrier
	|       |       +------+     }        requires all stores prior to the
	|       |  :    | E=5  |     }        barrier to be committed before
	|       |  :    +------+     }        further stores may take place
	|       |------>| D=4  |     }
	|       |       +------+
	+-------+       :      :
	                   |
	                   | Sequence in which stores are committed to the
	                   | memory system by CPU 1
	                   V

103562306a36Sopenharmony_ci
103662306a36Sopenharmony_ciSecondly, address-dependency barriers act as partial orderings on address-
103762306a36Sopenharmony_cidependent loads.  Consider the following sequence of events:
103862306a36Sopenharmony_ci
103962306a36Sopenharmony_ci	CPU 1			CPU 2
104062306a36Sopenharmony_ci	=======================	=======================
104162306a36Sopenharmony_ci		{ B = 7; X = 9; Y = 8; C = &Y }
104262306a36Sopenharmony_ci	STORE A = 1
104362306a36Sopenharmony_ci	STORE B = 2
104462306a36Sopenharmony_ci	<write barrier>
104562306a36Sopenharmony_ci	STORE C = &B		LOAD X
104662306a36Sopenharmony_ci	STORE D = 4		LOAD C (gets &B)
104762306a36Sopenharmony_ci				LOAD *C (reads B)
104862306a36Sopenharmony_ci
104962306a36Sopenharmony_ciWithout intervention, CPU 2 may perceive the events on CPU 1 in some
105062306a36Sopenharmony_cieffectively random order, despite the write barrier issued by CPU 1:
105162306a36Sopenharmony_ci
105262306a36Sopenharmony_ci	+-------+       :      :                :       :
105362306a36Sopenharmony_ci	|       |       +------+                +-------+  | Sequence of update
105462306a36Sopenharmony_ci	|       |------>| B=2  |-----       --->| Y->8  |  | of perception on
105562306a36Sopenharmony_ci	|       |  :    +------+     \          +-------+  | CPU 2
105662306a36Sopenharmony_ci	| CPU 1 |  :    | A=1  |      \     --->| C->&Y |  V
105762306a36Sopenharmony_ci	|       |       +------+       |        +-------+
105862306a36Sopenharmony_ci	|       |   wwwwwwwwwwwwwwww   |        :       :
105962306a36Sopenharmony_ci	|       |       +------+       |        :       :
106062306a36Sopenharmony_ci	|       |  :    | C=&B |---    |        :       :       +-------+
106162306a36Sopenharmony_ci	|       |  :    +------+   \   |        +-------+       |       |
106262306a36Sopenharmony_ci	|       |------>| D=4  |    ----------->| C->&B |------>|       |
106362306a36Sopenharmony_ci	|       |       +------+       |        +-------+       |       |
106462306a36Sopenharmony_ci	+-------+       :      :       |        :       :       |       |
106562306a36Sopenharmony_ci	                               |        :       :       |       |
106662306a36Sopenharmony_ci	                               |        :       :       | CPU 2 |
106762306a36Sopenharmony_ci	                               |        +-------+       |       |
106862306a36Sopenharmony_ci	    Apparently incorrect --->  |        | B->7  |------>|       |
106962306a36Sopenharmony_ci	    perception of B (!)        |        +-------+       |       |
107062306a36Sopenharmony_ci	                               |        :       :       |       |
107162306a36Sopenharmony_ci	                               |        +-------+       |       |
107262306a36Sopenharmony_ci	    The load of X holds --->    \       | X->9  |------>|       |
107362306a36Sopenharmony_ci	    up the maintenance           \      +-------+       |       |
107462306a36Sopenharmony_ci	    of coherence of B             ----->| B->2  |       +-------+
107562306a36Sopenharmony_ci	                                        +-------+
107662306a36Sopenharmony_ci	                                        :       :
107762306a36Sopenharmony_ci
107862306a36Sopenharmony_ci
107962306a36Sopenharmony_ciIn the above example, CPU 2 perceives that B is 7, despite the load of *C
108062306a36Sopenharmony_ci(which would be B) coming after the LOAD of C.
108162306a36Sopenharmony_ci
108262306a36Sopenharmony_ciIf, however, an address-dependency barrier were to be placed between the load
108362306a36Sopenharmony_ciof C and the load of *C (ie: B) on CPU 2:
108462306a36Sopenharmony_ci
108562306a36Sopenharmony_ci	CPU 1			CPU 2
108662306a36Sopenharmony_ci	=======================	=======================
108762306a36Sopenharmony_ci		{ B = 7; X = 9; Y = 8; C = &Y }
108862306a36Sopenharmony_ci	STORE A = 1
108962306a36Sopenharmony_ci	STORE B = 2
109062306a36Sopenharmony_ci	<write barrier>
109162306a36Sopenharmony_ci	STORE C = &B		LOAD X
109262306a36Sopenharmony_ci	STORE D = 4		LOAD C (gets &B)
109362306a36Sopenharmony_ci				<address-dependency barrier>
109462306a36Sopenharmony_ci				LOAD *C (reads B)
109562306a36Sopenharmony_ci
109662306a36Sopenharmony_cithen the following will occur:
109762306a36Sopenharmony_ci
109862306a36Sopenharmony_ci	+-------+       :      :                :       :
109962306a36Sopenharmony_ci	|       |       +------+                +-------+
110062306a36Sopenharmony_ci	|       |------>| B=2  |-----       --->| Y->8  |
110162306a36Sopenharmony_ci	|       |  :    +------+     \          +-------+
110262306a36Sopenharmony_ci	| CPU 1 |  :    | A=1  |      \     --->| C->&Y |
110362306a36Sopenharmony_ci	|       |       +------+       |        +-------+
110462306a36Sopenharmony_ci	|       |   wwwwwwwwwwwwwwww   |        :       :
110562306a36Sopenharmony_ci	|       |       +------+       |        :       :
110662306a36Sopenharmony_ci	|       |  :    | C=&B |---    |        :       :       +-------+
110762306a36Sopenharmony_ci	|       |  :    +------+   \   |        +-------+       |       |
110862306a36Sopenharmony_ci	|       |------>| D=4  |    ----------->| C->&B |------>|       |
110962306a36Sopenharmony_ci	|       |       +------+       |        +-------+       |       |
111062306a36Sopenharmony_ci	+-------+       :      :       |        :       :       |       |
111162306a36Sopenharmony_ci	                               |        :       :       |       |
111262306a36Sopenharmony_ci	                               |        :       :       | CPU 2 |
111362306a36Sopenharmony_ci	                               |        +-------+       |       |
111462306a36Sopenharmony_ci	                               |        | X->9  |------>|       |
111562306a36Sopenharmony_ci	                               |        +-------+       |       |
111662306a36Sopenharmony_ci	  Makes sure all effects --->   \   aaaaaaaaaaaaaaaaa   |       |
111762306a36Sopenharmony_ci	  prior to the store of C        \      +-------+       |       |
111862306a36Sopenharmony_ci	  are perceptible to              ----->| B->2  |------>|       |
111962306a36Sopenharmony_ci	  subsequent loads                      +-------+       |       |
112062306a36Sopenharmony_ci	                                        :       :       +-------+
112162306a36Sopenharmony_ci

And thirdly, a read barrier acts as a partial order on loads.  Consider the
following sequence of events:

	CPU 1			CPU 2
	=======================	=======================
		{ A = 0, B = 9 }
	STORE A=1
	<write barrier>
	STORE B=2
				LOAD B
				LOAD A

Without intervention, CPU 2 may then choose to perceive the events on CPU 1 in
some effectively random order, despite the write barrier issued by CPU 1:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       +-------+       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       | A->0  |------>|       |
	                                |       +-------+       |       |
	                                |       :       :       +-------+
	                                 \      :       :
	                                  \     +-------+
	                                   ---->| A->1  |
	                                        +-------+
	                                        :       :


If, however, a read barrier were to be placed between the load of B and the
load of A on CPU 2:

	CPU 1			CPU 2
	=======================	=======================
		{ A = 0, B = 9 }
	STORE A=1
	<write barrier>
	STORE B=2
				LOAD B
				<read barrier>
				LOAD A

then the partial ordering imposed by CPU 1 will be perceived correctly by CPU
2:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       +-------+       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       :       :       |       |
	                                |       :       :       |       |
	  At this point the read ---->   \  rrrrrrrrrrrrrrrrr   |       |
	  barrier causes all effects      \     +-------+       |       |
	  prior to the storage of B        ---->| A->1  |------>|       |
	  to be perceptible to CPU 2            +-------+       |       |
	                                        :       :       +-------+


To illustrate this more completely, consider what could happen if the code
contained a load of A either side of the read barrier:

	CPU 1			CPU 2
	=======================	=======================
		{ A = 0, B = 9 }
	STORE A=1
	<write barrier>
	STORE B=2
				LOAD B
				LOAD A [first load of A]
				<read barrier>
				LOAD A [second load of A]

Even though the two loads of A both occur after the load of B, they may both
come up with different values:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       +-------+       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       :       :       |       |
	                                |       :       :       |       |
	                                |       +-------+       |       |
	                                |       | A->0  |------>| 1st   |
	                                |       +-------+       |       |
	  At this point the read ---->   \  rrrrrrrrrrrrrrrrr   |       |
	  barrier causes all effects      \     +-------+       |       |
	  prior to the storage of B        ---->| A->1  |------>| 2nd   |
	  to be perceptible to CPU 2            +-------+       |       |
	                                        :       :       +-------+


But it may be that the update to A from CPU 1 becomes perceptible to CPU 2
before the read barrier completes anyway:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       +-------+       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       :       :       |       |
	                                 \      :       :       |       |
	                                  \     +-------+       |       |
	                                   ---->| A->1  |------>| 1st   |
	                                        +-------+       |       |
	                                    rrrrrrrrrrrrrrrrr   |       |
	                                        +-------+       |       |
	                                        | A->1  |------>| 2nd   |
	                                        +-------+       |       |
	                                        :       :       +-------+


The guarantee is that the second load will always come up with A == 1 if the
load of B came up with B == 2.  No such guarantee exists for the first load of
A; that may come up with either A == 0 or A == 1.


READ MEMORY BARRIERS VS LOAD SPECULATION
----------------------------------------

Many CPUs speculate with loads: that is, they see that they will need to load
an item from memory, find a time when they're not using the bus for any other
loads, and do the load in advance - even though they haven't actually reached
that point in the instruction execution flow yet.  This permits the actual
load instruction to potentially complete immediately because the CPU already
has the value to hand.

It may turn out that the CPU didn't actually need the value - perhaps because a
branch circumvented the load - in which case it can discard the value or just
cache it for later use.

Consider:

	CPU 1			CPU 2
	=======================	=======================
				LOAD B
				DIVIDE		} Divide instructions generally
				DIVIDE		} take a long time to perform
				LOAD A

Which might appear as this:

	                                        :       :       +-------+
	                                        +-------+       |       |
	                                    --->| B->2  |------>|       |
	                                        +-------+       | CPU 2 |
	                                        :       :DIVIDE |       |
	                                        +-------+       |       |
	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
	division speculates on the              +-------+   ~   |       |
	LOAD of A                               :       :   ~   |       |
	                                        :       :DIVIDE |       |
	                                        :       :   ~   |       |
	Once the divisions are complete -->     :       :   ~-->|       |
	the CPU can then perform the            :       :       |       |
	LOAD with immediate effect              :       :       +-------+


Placing a read barrier or an address-dependency barrier just before the second
load:

	CPU 1			CPU 2
	=======================	=======================
				LOAD B
				DIVIDE
				DIVIDE
				<read barrier>
				LOAD A

will force any value speculatively obtained to be reconsidered to an extent
dependent on the type of barrier used.  If there was no change made to the
speculated memory location, then the speculated value will just be used:

	                                        :       :       +-------+
	                                        +-------+       |       |
	                                    --->| B->2  |------>|       |
	                                        +-------+       | CPU 2 |
	                                        :       :DIVIDE |       |
	                                        +-------+       |       |
	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
	division speculates on the              +-------+   ~   |       |
	LOAD of A                               :       :   ~   |       |
	                                        :       :DIVIDE |       |
	                                        :       :   ~   |       |
	                                        :       :   ~   |       |
	                                    rrrrrrrrrrrrrrrr~   |       |
	                                        :       :   ~   |       |
	                                        :       :   ~-->|       |
	                                        :       :       |       |
	                                        :       :       +-------+


but if there was an update or an invalidation from another CPU pending, then
the speculation will be cancelled and the value reloaded:

	                                        :       :       +-------+
	                                        +-------+       |       |
	                                    --->| B->2  |------>|       |
	                                        +-------+       | CPU 2 |
	                                        :       :DIVIDE |       |
	                                        +-------+       |       |
	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
	division speculates on the              +-------+   ~   |       |
	LOAD of A                               :       :   ~   |       |
	                                        :       :DIVIDE |       |
	                                        :       :   ~   |       |
	                                        :       :   ~   |       |
	                                    rrrrrrrrrrrrrrrrr   |       |
	                                        +-------+       |       |
	The speculation is discarded --->   --->| A->1  |------>|       |
	and an updated value is                 +-------+       |       |
	retrieved                               :       :       +-------+


MULTICOPY ATOMICITY
-------------------

Multicopy atomicity is a deeply intuitive notion about ordering that is
not always provided by real computer systems, namely that a given store
becomes visible at the same time to all CPUs, or, alternatively, that all
CPUs agree on the order in which all stores become visible.  However,
support of full multicopy atomicity would rule out valuable hardware
optimizations, so a weaker form called ``other multicopy atomicity''
instead guarantees only that a given store becomes visible at the same
time to all -other- CPUs.  The remainder of this document discusses this
weaker form, but for brevity will call it simply ``multicopy atomicity''.

The following example demonstrates multicopy atomicity:

	CPU 1			CPU 2			CPU 3
	=======================	=======================	=======================
		{ X = 0, Y = 0 }
	STORE X=1		r1=LOAD X (reads 1)	LOAD Y (reads 1)
				<general barrier>	<read barrier>
				STORE Y=r1		LOAD X

Suppose that CPU 2's load from X returns 1, which it then stores to Y,
and CPU 3's load from Y returns 1.  This indicates that CPU 1's store
to X precedes CPU 2's load from X and that CPU 2's store to Y precedes
CPU 3's load from Y.  In addition, the memory barriers guarantee that
CPU 2 executes its load before its store, and CPU 3 loads from Y before
it loads from X.  The question is then "Can CPU 3's load from X return 0?"

Because CPU 3's load from X in some sense comes after CPU 2's load, it
is natural to expect that CPU 3's load from X must therefore return 1.
This expectation follows from multicopy atomicity: if a load executing
on CPU B follows a load from the same variable executing on CPU A (and
CPU A did not originally store the value which it read), then on
multicopy-atomic systems, CPU B's load must return either the same value
that CPU A's load did or some later value.  However, the Linux kernel
does not require systems to be multicopy atomic.

The use of a general memory barrier in the example above compensates
for any lack of multicopy atomicity.  In the example, if CPU 2's load
from X returns 1 and CPU 3's load from Y returns 1, then CPU 3's load
from X must indeed also return 1.

However, dependencies, read barriers, and write barriers are not always
able to compensate for non-multicopy atomicity.  For example, suppose
that CPU 2's general barrier is removed from the above example, leaving
only the data dependency shown below:

	CPU 1			CPU 2			CPU 3
	=======================	=======================	=======================
		{ X = 0, Y = 0 }
	STORE X=1		r1=LOAD X (reads 1)	LOAD Y (reads 1)
				<data dependency>	<read barrier>
				STORE Y=r1		LOAD X (reads 0)

This substitution allows non-multicopy atomicity to run rampant: in
this example, it is perfectly legal for CPU 2's load from X to return 1,
CPU 3's load from Y to return 1, and its load from X to return 0.

The key point is that although CPU 2's data dependency orders its load
and store, it does not guarantee to order CPU 1's store.  Thus, if this
example runs on a non-multicopy-atomic system where CPUs 1 and 2 share a
store buffer or a level of cache, CPU 2 might have early access to CPU 1's
writes.  General barriers are therefore required to ensure that all CPUs
agree on the combined order of multiple accesses.

General barriers can compensate not only for non-multicopy atomicity,
but can also generate additional ordering that can ensure that -all-
CPUs will perceive the same order of -all- operations.  In contrast, a
chain of release-acquire pairs does not provide this additional ordering,
which means that only those CPUs on the chain are guaranteed to agree
on the combined order of the accesses.  For example, switching to C code
in deference to the ghost of Herman Hollerith:

	int u, v, x, y, z;

	void cpu0(void)
	{
		r0 = smp_load_acquire(&x);
		WRITE_ONCE(u, 1);
		smp_store_release(&y, 1);
	}

	void cpu1(void)
	{
		r1 = smp_load_acquire(&y);
		r4 = READ_ONCE(v);
		r5 = READ_ONCE(u);
		smp_store_release(&z, 1);
	}

	void cpu2(void)
	{
		r2 = smp_load_acquire(&z);
		smp_store_release(&x, 1);
	}

	void cpu3(void)
	{
		WRITE_ONCE(v, 1);
		smp_mb();
		r3 = READ_ONCE(u);
	}

Because cpu0(), cpu1(), and cpu2() participate in a chain of
smp_store_release()/smp_load_acquire() pairs, the following outcome
is prohibited:

	r0 == 1 && r1 == 1 && r2 == 1

Furthermore, because of the release-acquire relationship between cpu0()
and cpu1(), cpu1() must see cpu0()'s writes, so that the following
outcome is prohibited:

	r1 == 1 && r5 == 0
147862306a36Sopenharmony_ci
147962306a36Sopenharmony_ciHowever, the ordering provided by a release-acquire chain is local
148062306a36Sopenharmony_cito the CPUs participating in that chain and does not apply to cpu3(),
148162306a36Sopenharmony_ciat least aside from stores.  Therefore, the following outcome is possible:
148262306a36Sopenharmony_ci
148362306a36Sopenharmony_ci	r0 == 0 && r1 == 1 && r2 == 1 && r3 == 0 && r4 == 0
148462306a36Sopenharmony_ci
148562306a36Sopenharmony_ciAs an aside, the following outcome is also possible:
148662306a36Sopenharmony_ci
148762306a36Sopenharmony_ci	r0 == 0 && r1 == 1 && r2 == 1 && r3 == 0 && r4 == 0 && r5 == 1
148862306a36Sopenharmony_ci
148962306a36Sopenharmony_ciAlthough cpu0(), cpu1(), and cpu2() will see their respective reads and
149062306a36Sopenharmony_ciwrites in order, CPUs not involved in the release-acquire chain might
149162306a36Sopenharmony_ciwell disagree on the order.  This disagreement stems from the fact that
149262306a36Sopenharmony_cithe weak memory-barrier instructions used to implement smp_load_acquire()
149362306a36Sopenharmony_ciand smp_store_release() are not required to order prior stores against
149462306a36Sopenharmony_cisubsequent loads in all cases.  This means that cpu3() can see cpu0()'s
149562306a36Sopenharmony_cistore to u as happening -after- cpu1()'s load from v, even though
149662306a36Sopenharmony_ciboth cpu0() and cpu1() agree that these two operations occurred in the
149762306a36Sopenharmony_ciintended order.
149862306a36Sopenharmony_ci
149962306a36Sopenharmony_ciHowever, please keep in mind that smp_load_acquire() is not magic.
150062306a36Sopenharmony_ciIn particular, it simply reads from its argument with ordering.  It does
150162306a36Sopenharmony_ci-not- ensure that any particular value will be read.  Therefore, the
150262306a36Sopenharmony_cifollowing outcome is possible:
150362306a36Sopenharmony_ci
150462306a36Sopenharmony_ci	r0 == 0 && r1 == 0 && r2 == 0 && r5 == 0
150562306a36Sopenharmony_ci
150662306a36Sopenharmony_ciNote that this outcome can happen even on a mythical sequentially
150762306a36Sopenharmony_ciconsistent system where nothing is ever reordered.
150862306a36Sopenharmony_ci
150962306a36Sopenharmony_ciTo reiterate, if your code requires full ordering of all operations,
151062306a36Sopenharmony_ciuse general barriers throughout.


========================
EXPLICIT KERNEL BARRIERS
========================

The Linux kernel has a variety of different barriers that act at different
levels:

  (*) Compiler barrier.

  (*) CPU memory barriers.


COMPILER BARRIER
----------------

The Linux kernel has an explicit compiler barrier function that prevents the
compiler from moving the memory accesses either side of it to the other side:

	barrier();

This is a general barrier -- there are no read-read or write-write
variants of barrier().  However, READ_ONCE() and WRITE_ONCE() can be
thought of as weak forms of barrier() that affect only the specific
accesses flagged by the READ_ONCE() or WRITE_ONCE().

The barrier() function has the following effects:

 (*) Prevents the compiler from reordering accesses following the
     barrier() to precede any accesses preceding the barrier().
     One example use for this property is to ease communication between
     interrupt-handler code and the code that was interrupted.

 (*) Within a loop, forces the compiler to load the variables used
     in that loop's conditional on each pass through that loop.

The READ_ONCE() and WRITE_ONCE() functions can prevent any number of
optimizations that, while perfectly safe in single-threaded code, can
be fatal in concurrent code.  Here are some examples of these sorts
of optimizations:

 (*) The compiler is within its rights to reorder loads and stores
     to the same variable, and in some cases, the CPU is within its
     rights to reorder loads to the same variable.  This means that
     the following code:

	a[0] = x;
	a[1] = x;

     Might result in an older value of x stored in a[1] than in a[0].
     Prevent both the compiler and the CPU from doing this as follows:

	a[0] = READ_ONCE(x);
	a[1] = READ_ONCE(x);

     In short, READ_ONCE() and WRITE_ONCE() provide cache coherence for
     accesses from multiple CPUs to a single variable.

 (*) The compiler is within its rights to merge successive loads from
     the same variable.  Such merging can cause the compiler to "optimize"
     the following code:

	while (tmp = a)
		do_something_with(tmp);

     into the following code, which, although in some sense legitimate
     for single-threaded code, is almost certainly not what the developer
     intended:

	if (tmp = a)
		for (;;)
			do_something_with(tmp);

     Use READ_ONCE() to prevent the compiler from doing this to you:

	while (tmp = READ_ONCE(a))
		do_something_with(tmp);
 (*) The compiler is within its rights to reload a variable, for example,
     in cases where high register pressure prevents the compiler from
     keeping all data of interest in registers.  The compiler might
     therefore optimize the variable 'tmp' out of our previous example:

	while (tmp = a)
		do_something_with(tmp);

     This could result in the following code, which is perfectly safe in
     single-threaded code, but can be fatal in concurrent code:

	while (a)
		do_something_with(a);

     For example, the optimized version of this code could result in
     passing a zero to do_something_with() in the case where the variable
     a was modified by some other CPU between the "while" statement and
     the call to do_something_with().

     Again, use READ_ONCE() to prevent the compiler from doing this:

	while (tmp = READ_ONCE(a))
		do_something_with(tmp);

     Note that if the compiler runs short of registers, it might save
     tmp onto the stack.  The overhead of this saving and later restoring
     is why compilers reload variables.  Doing so is perfectly safe for
     single-threaded code, so you need to tell the compiler about cases
     where it is not safe.

 (*) The compiler is within its rights to omit a load entirely if it knows
     what the value will be.  For example, if the compiler can prove that
     the value of variable 'a' is always zero, it can optimize this code:

	while (tmp = a)
		do_something_with(tmp);

     Into this:

	do { } while (0);

     This transformation is a win for single-threaded code because it
     gets rid of a load and a branch.  The problem is that the compiler
     will carry out its proof assuming that the current CPU is the only
     one updating variable 'a'.  If variable 'a' is shared, then the
     compiler's proof will be erroneous.  Use READ_ONCE() to tell the
     compiler that it doesn't know as much as it thinks it does:

	while (tmp = READ_ONCE(a))
		do_something_with(tmp);

     But please note that the compiler is also closely watching what you
     do with the value after the READ_ONCE().  For example, suppose you
     do the following and MAX is a preprocessor macro with the value 1:

	while ((tmp = READ_ONCE(a)) % MAX)
		do_something_with(tmp);

     Then the compiler knows that the result of the "%" operator applied
     to MAX will always be zero, again allowing the compiler to optimize
     the code into near-nonexistence.  (It will still load from the
     variable 'a'.)

 (*) Similarly, the compiler is within its rights to omit a store entirely
     if it knows that the variable already has the value being stored.
     Again, the compiler assumes that the current CPU is the only one
     storing into the variable, which can cause the compiler to do the
     wrong thing for shared variables.  For example, suppose you have
     the following:

	a = 0;
	... Code that does not store to variable a ...
	a = 0;

     The compiler sees that the value of variable 'a' is already zero, so
     it might well omit the second store.  This would come as a fatal
     surprise if some other CPU might have stored to variable 'a' in the
     meantime.

     Use WRITE_ONCE() to prevent the compiler from making this sort of
     wrong guess:

	WRITE_ONCE(a, 0);
	... Code that does not store to variable a ...
	WRITE_ONCE(a, 0);

 (*) The compiler is within its rights to reorder memory accesses unless
     you tell it not to.  For example, consider the following interaction
     between process-level code and an interrupt handler:

	void process_level(void)
	{
		msg = get_message();
		flag = true;
	}

	void interrupt_handler(void)
	{
		if (flag)
			process_message(msg);
	}
     There is nothing to prevent the compiler from transforming
     process_level() to the following; in fact, this might well be a
     win for single-threaded code:

	void process_level(void)
	{
		flag = true;
		msg = get_message();
	}

     If the interrupt occurs between these two statements, then
     interrupt_handler() might be passed a garbled msg.  Use WRITE_ONCE()
     to prevent this as follows:

	void process_level(void)
	{
		WRITE_ONCE(msg, get_message());
		WRITE_ONCE(flag, true);
	}

	void interrupt_handler(void)
	{
		if (READ_ONCE(flag))
			process_message(READ_ONCE(msg));
	}

     Note that the READ_ONCE() and WRITE_ONCE() wrappers in
     interrupt_handler() are needed if this interrupt handler can itself
     be interrupted by something that also accesses 'flag' and 'msg',
     for example, a nested interrupt or an NMI.  Otherwise, READ_ONCE()
     and WRITE_ONCE() are not needed in interrupt_handler() other than
     for documentation purposes.  (Note also that nested interrupts
     do not typically occur in modern Linux kernels; in fact, if an
     interrupt handler returns with interrupts enabled, you will get a
     WARN_ONCE() splat.)

     You should assume that the compiler can move READ_ONCE() and
     WRITE_ONCE() past code not containing READ_ONCE(), WRITE_ONCE(),
     barrier(), or similar primitives.

     This effect could also be achieved using barrier(), but READ_ONCE()
     and WRITE_ONCE() are more selective:  With READ_ONCE() and
     WRITE_ONCE(), the compiler need only forget the contents of the
     indicated memory locations, while with barrier() the compiler must
     discard the value of all memory locations that it has currently
     cached in any machine registers.  Of course, the compiler must also
     respect the order in which the READ_ONCE()s and WRITE_ONCE()s occur,
     though the CPU of course need not do so.

 (*) The compiler is within its rights to invent stores to a variable,
     as in the following example:

	if (a)
		b = a;
	else
		b = 42;

     The compiler might save a branch by optimizing this as follows:

	b = 42;
	if (a)
		b = a;

     In single-threaded code, this is not only safe, but also saves
     a branch.  Unfortunately, in concurrent code, this optimization
     could cause some other CPU to see a spurious value of 42 -- even
     if variable 'a' was never zero -- when loading variable 'b'.
     Use WRITE_ONCE() to prevent this as follows:

	if (a)
		WRITE_ONCE(b, a);
	else
		WRITE_ONCE(b, 42);

     The compiler can also invent loads.  These are usually less
     damaging, but they can result in cache-line bouncing and thus in
     poor performance and scalability.  Use READ_ONCE() to prevent
     invented loads.

 (*) READ_ONCE() and WRITE_ONCE() also prevent "load tearing" and
     "store tearing" of aligned memory locations whose size allows them
     to be accessed with a single memory-reference instruction, that is,
     cases in which a single large access is replaced by multiple
     smaller accesses.  For example, given an architecture having
     16-bit store instructions with 7-bit immediate fields, the compiler
     might be tempted to use two 16-bit store-immediate instructions to
     implement the following 32-bit store:

	p = 0x00010002;

     Please note that GCC really does use this sort of optimization,
     which is not surprising given that it would likely take more
     than two instructions to build the constant and then store it.
     This optimization can therefore be a win in single-threaded code.
     In fact, a recent bug (since fixed) caused GCC to incorrectly use
     this optimization in a volatile store.  In the absence of such bugs,
     use of WRITE_ONCE() prevents store tearing in the following example:

	WRITE_ONCE(p, 0x00010002);

     Use of packed structures can also result in load and store tearing,
     as in this example:

	struct __attribute__((__packed__)) foo {
		short a;
		int b;
		short c;
	};
	struct foo foo1, foo2;
	...

	foo2.a = foo1.a;
	foo2.b = foo1.b;
	foo2.c = foo1.c;

     Because there are no READ_ONCE() or WRITE_ONCE() wrappers and no
     volatile markings, the compiler would be well within its rights to
     implement these three assignment statements as a pair of 32-bit
     loads followed by a pair of 32-bit stores.  This would result in
     load tearing on 'foo1.b' and store tearing on 'foo2.b'.  READ_ONCE()
     and WRITE_ONCE() again prevent tearing in this example:

	foo2.a = foo1.a;
	WRITE_ONCE(foo2.b, READ_ONCE(foo1.b));
	foo2.c = foo1.c;

All that aside, it is never necessary to use READ_ONCE() and
WRITE_ONCE() on a variable that has been marked volatile.  For example,
because 'jiffies' is marked volatile, it is never necessary to
say READ_ONCE(jiffies).  The reason for this is that READ_ONCE() and
WRITE_ONCE() are implemented as volatile casts, which have no effect
when the argument is already marked volatile.

Please note that these compiler barriers have no direct effect on the CPU,
which may then reorder things however it wishes.


CPU MEMORY BARRIERS
-------------------

The Linux kernel has seven basic CPU memory barriers:

	TYPE			MANDATORY	SMP CONDITIONAL
	=======================	===============	===============
	GENERAL			mb()		smp_mb()
	WRITE			wmb()		smp_wmb()
	READ			rmb()		smp_rmb()
	ADDRESS DEPENDENCY			READ_ONCE()


All memory barriers except the address-dependency barriers imply a compiler
barrier.  Address dependencies do not impose any additional compiler ordering.

Aside: In the case of address dependencies, the compiler would be expected
to issue the loads in the correct order (eg. `a[b]` would have to load
the value of b before loading a[b]), however there is no guarantee in
the C specification that the compiler may not speculate the value of b
(eg. is equal to 1) and load a[b] before b (eg. tmp = a[1]; if (b != 1)
tmp = a[b]; ).  There is also the problem of a compiler reloading b after
having loaded a[b], thus having a newer copy of b than a[b].  A consensus
has not yet been reached about these problems, however the READ_ONCE()
macro is a good place to start looking.

SMP memory barriers are reduced to compiler barriers on uniprocessor compiled
systems because it is assumed that a CPU will appear to be self-consistent,
and will order overlapping accesses correctly with respect to itself.
However, see the subsection on "Virtual Machine Guests" below.

[!] Note that SMP memory barriers _must_ be used to control the ordering of
references to shared memory on SMP systems, though the use of locking instead
is sufficient.

Mandatory barriers should not be used to control SMP effects, since mandatory
barriers impose unnecessary overhead on both SMP and UP systems. They may,
however, be used to control MMIO effects on accesses through relaxed memory I/O
windows.  These barriers are required even on non-SMP systems as they affect
the order in which memory operations appear to a device by prohibiting both the
compiler and the CPU from reordering them.


There are some more advanced barrier functions:

 (*) smp_store_mb(var, value)

     This assigns the value to the variable and then inserts a full memory
     barrier after it.  It isn't guaranteed to insert anything more than a
     compiler barrier in a UP compilation.


 (*) smp_mb__before_atomic();
 (*) smp_mb__after_atomic();

     These are for use with atomic RMW functions that do not imply memory
     barriers, but where the code needs a memory barrier.  Examples of
     atomic RMW functions that do not imply a memory barrier are add,
     subtract, (failed) conditional operations, and the _relaxed
     functions, but not atomic_read or atomic_set.  A common example
     where a memory barrier may be required is when atomic ops are used
     for reference counting.

     These are also used for atomic RMW bitop functions that do not imply a
     memory barrier (such as set_bit and clear_bit).

     As an example, consider a piece of code that marks an object as being dead
     and then decrements the object's reference count:

	obj->dead = 1;
	smp_mb__before_atomic();
	atomic_dec(&obj->ref_count);

     This makes sure that the death mark on the object is perceived to be set
     *before* the reference counter is decremented.

     See Documentation/atomic_{t,bitops}.txt for more information.


 (*) dma_wmb();
 (*) dma_rmb();
 (*) dma_mb();

     These are for use with consistent memory to guarantee the ordering
     of writes or reads of shared memory accessible to both the CPU and a
     DMA capable device.  See Documentation/core-api/dma-api.rst for more
     information about consistent memory.

     For example, consider a device driver that shares memory with a device
     and uses a descriptor status value to indicate if the descriptor belongs
     to the device or the CPU, and a doorbell to notify it when new
     descriptors are available:

	if (desc->status != DEVICE_OWN) {
		/* do not read data until we own descriptor */
		dma_rmb();

		/* read/modify data */
		read_data = desc->data;
		desc->data = write_data;

		/* flush modifications before status update */
		dma_wmb();

		/* assign ownership */
		desc->status = DEVICE_OWN;

		/* Make descriptor status visible to the device followed by
		 * notify device of new descriptor
		 */
		writel(DESC_NOTIFY, doorbell);
	}

     The dma_rmb() allows us to guarantee that the device has released ownership
     before we read the data from the descriptor, and the dma_wmb() allows
     us to guarantee the data is written to the descriptor before the device
     can see it now has ownership.  The dma_mb() implies both a dma_rmb() and
     a dma_wmb().

     Note that the dma_*() barriers do not provide any ordering guarantees for
     accesses to MMIO regions.  See the later "KERNEL I/O BARRIER EFFECTS"
     subsection for more information about I/O accessors and MMIO ordering.

 (*) pmem_wmb();

     This is for use with persistent memory to ensure that stores whose
     modifications are written to persistent storage have reached a
     platform durability domain.

     For example, after a non-temporal write to a pmem region, we use
     pmem_wmb() to ensure that the stores have reached a platform
     durability domain.  This ensures that the stores have updated
     persistent storage before any data access or data transfer caused
     by subsequent instructions is initiated.  This is in addition to
     the ordering done by wmb().

     For loads from persistent memory, existing read memory barriers are
     sufficient to ensure read ordering.

 (*) io_stop_wc();

     For memory accesses with write-combining attributes (e.g. those returned
     by ioremap_wc()), the CPU may wait for prior accesses to be merged with
     subsequent ones.  io_stop_wc() can be used to prevent the merging of
     write-combining memory accesses before this macro with those after it
     when such a wait has performance implications.

197462306a36Sopenharmony_ci===============================
197562306a36Sopenharmony_ciIMPLICIT KERNEL MEMORY BARRIERS
197662306a36Sopenharmony_ci===============================

Some of the other functions in the Linux kernel imply memory barriers, amongst
which are locking and scheduling functions.

This specification is a _minimum_ guarantee; any particular architecture may
provide more substantial guarantees, but these may not be relied upon outside
of arch-specific code.


LOCK ACQUISITION FUNCTIONS
--------------------------

The Linux kernel has a number of locking constructs:

 (*) spin locks
 (*) R/W spin locks
 (*) mutexes
 (*) semaphores
 (*) R/W semaphores

In all cases there are variants on "ACQUIRE" operations and "RELEASE" operations
for each construct.  These operations all imply certain barriers:

 (1) ACQUIRE operation implication:

     Memory operations issued after the ACQUIRE will be completed after the
     ACQUIRE operation has completed.

     Memory operations issued before the ACQUIRE may be completed after
     the ACQUIRE operation has completed.

 (2) RELEASE operation implication:

     Memory operations issued before the RELEASE will be completed before the
     RELEASE operation has completed.

     Memory operations issued after the RELEASE may be completed before the
     RELEASE operation has completed.

 (3) ACQUIRE vs ACQUIRE implication:

     All ACQUIRE operations issued before another ACQUIRE operation will be
     completed before that ACQUIRE operation.

 (4) ACQUIRE vs RELEASE implication:

     All ACQUIRE operations issued before a RELEASE operation will be
     completed before the RELEASE operation.

 (5) Failed conditional ACQUIRE implication:

     Certain locking variants of the ACQUIRE operation may fail, either due to
     being unable to get the lock immediately, or due to receiving an unblocked
     signal while asleep waiting for the lock to become available.  Failed
     locks do not imply any sort of barrier.

[!] Note: one of the consequences of lock ACQUIREs and RELEASEs being only
one-way barriers is that the effects of instructions outside of a critical
section may seep into the inside of the critical section.

An ACQUIRE followed by a RELEASE may not be assumed to be a full memory barrier
because it is possible for an access preceding the ACQUIRE to happen after the
ACQUIRE, and an access following the RELEASE to happen before the RELEASE, and
the two accesses can themselves then cross:

	*A = a;
	ACQUIRE M
	RELEASE M
	*B = b;

may occur as:

	ACQUIRE M, STORE *B, STORE *A, RELEASE M

When the ACQUIRE and RELEASE are a lock acquisition and release,
respectively, this same reordering can occur if the lock's ACQUIRE and
RELEASE are to the same lock variable, but only from the perspective of
another CPU not holding that lock.  In short, an ACQUIRE followed by a
RELEASE may -not- be assumed to be a full memory barrier.

Similarly, the reverse case of a RELEASE followed by an ACQUIRE does
not imply a full memory barrier.  Therefore, the CPU's execution of the
critical sections corresponding to the RELEASE and the ACQUIRE can cross,
so that:

	*A = a;
	RELEASE M
	ACQUIRE N
	*B = b;

could occur as:

	ACQUIRE N, STORE *B, STORE *A, RELEASE M

It might appear that this reordering could introduce a deadlock.
However, this cannot happen because if such a deadlock threatened,
the RELEASE would simply complete, thereby avoiding the deadlock.

	Why does this work?

	One key point is that we are only talking about the CPU doing
	the reordering, not the compiler.  If the compiler (or, for
	that matter, the developer) switched the operations, deadlock
	-could- occur.

	But suppose the CPU reordered the operations.  In this case,
	the unlock precedes the lock in the assembly code.  The CPU
	simply elected to try executing the later lock operation first.
	If there is a deadlock, this lock operation will simply spin (or
	try to sleep, but more on that later).  The CPU will eventually
	execute the unlock operation (which preceded the lock operation
	in the assembly code), which will unravel the potential deadlock,
	allowing the lock operation to succeed.

	But what if the lock is a sleeplock?  In that case, the code will
	try to enter the scheduler, where it will eventually encounter
	a memory barrier, which will force the earlier unlock operation
	to complete, again unraveling the deadlock.  There might be
	a sleep-unlock race, but the locking primitive needs to resolve
	such races properly in any case.

Locks and semaphores may not provide any guarantee of ordering on UP-compiled
systems, and so cannot be counted on in such a situation to actually achieve
anything at all - especially with respect to I/O accesses - unless combined
with interrupt-disabling operations.

See also the section on "Inter-CPU acquiring barrier effects".


As an example, consider the following:

	*A = a;
	*B = b;
	ACQUIRE
	*C = c;
	*D = d;
	RELEASE
	*E = e;
	*F = f;

The following sequence of events is acceptable:

	ACQUIRE, {*F,*A}, *E, {*C,*D}, *B, RELEASE

	[+] Note that {*F,*A} indicates a combined access.

But none of the following are:

	{*F,*A}, *B,	ACQUIRE, *C, *D,	RELEASE, *E
	*A, *B, *C,	ACQUIRE, *D,		RELEASE, *E, *F
	*A, *B,		ACQUIRE, *C,		RELEASE, *D, *E, *F
	*B,		ACQUIRE, *C, *D,	RELEASE, {*F,*A}, *E



INTERRUPT DISABLING FUNCTIONS
-----------------------------

Functions that disable interrupts (ACQUIRE equivalent) and enable interrupts
(RELEASE equivalent) will act as compiler barriers only.  So if memory or I/O
barriers are required in such a situation, they must be provided by some
other means.


SLEEP AND WAKE-UP FUNCTIONS
---------------------------

Sleeping and waking on an event flagged in global data can be viewed as an
interaction between two pieces of data: the task state of the task waiting for
the event and the global data used to indicate the event.  To make sure that
these appear to happen in the right order, the primitives to begin the process
of going to sleep, and the primitives to initiate a wake up imply certain
barriers.

Firstly, the sleeper normally follows something like this sequence of events:

	for (;;) {
		set_current_state(TASK_UNINTERRUPTIBLE);
		if (event_indicated)
			break;
		schedule();
	}

A general memory barrier is interpolated automatically by set_current_state()
after it has altered the task state:

	CPU 1
	===============================
	set_current_state();
	  smp_store_mb();
	    STORE current->state
	    <general barrier>
	LOAD event_indicated

set_current_state() may be wrapped by:

	prepare_to_wait();
	prepare_to_wait_exclusive();

which therefore also imply a general memory barrier after setting the state.
The whole sequence above is available in various canned forms, all of which
interpolate the memory barrier in the right place:

	wait_event();
	wait_event_interruptible();
	wait_event_interruptible_exclusive();
	wait_event_interruptible_timeout();
	wait_event_killable();
	wait_event_timeout();
	wait_on_bit();
	wait_on_bit_lock();


Secondly, code that performs a wake up normally follows something like this:

	event_indicated = 1;
	wake_up(&event_wait_queue);

or:

	event_indicated = 1;
	wake_up_process(event_daemon);

A general memory barrier is executed by wake_up() if it wakes something up.
If it doesn't wake anything up then a memory barrier may or may not be
executed; you must not rely on it.  The barrier occurs before the task state
is accessed; in particular, it sits between the STORE to indicate the event
and the STORE to set TASK_RUNNING:

	CPU 1 (Sleeper)			CPU 2 (Waker)
	===============================	===============================
	set_current_state();		STORE event_indicated
	  smp_store_mb();		wake_up();
	    STORE current->state	  ...
	    <general barrier>		  <general barrier>
	LOAD event_indicated		  if ((LOAD task->state) & TASK_NORMAL)
					    STORE task->state

where "task" is the thread being woken up and equals CPU 1's "current".
221662306a36Sopenharmony_ci
221762306a36Sopenharmony_ciTo repeat, a general memory barrier is guaranteed to be executed by wake_up()
221862306a36Sopenharmony_ciif something is actually awakened, but otherwise there is no such guarantee.
221962306a36Sopenharmony_ciTo see this, consider the following sequence of events, where X and Y are both
222062306a36Sopenharmony_ciinitially zero:
222162306a36Sopenharmony_ci
222262306a36Sopenharmony_ci	CPU 1				CPU 2
222362306a36Sopenharmony_ci	===============================	===============================
222462306a36Sopenharmony_ci	X = 1;				Y = 1;
222562306a36Sopenharmony_ci	smp_mb();			wake_up();
222662306a36Sopenharmony_ci	LOAD Y				LOAD X
222762306a36Sopenharmony_ci
222862306a36Sopenharmony_ciIf a wakeup does occur, one (at least) of the two loads must see 1.  If, on
222962306a36Sopenharmony_cithe other hand, a wakeup does not occur, both loads might see 0.
223062306a36Sopenharmony_ci
223162306a36Sopenharmony_ciwake_up_process() always executes a general memory barrier.  The barrier again
223262306a36Sopenharmony_cioccurs before the task state is accessed.  In particular, if the wake_up() in
223362306a36Sopenharmony_cithe previous snippet were replaced by a call to wake_up_process() then one of
223462306a36Sopenharmony_cithe two loads would be guaranteed to see 1.

The available waker functions include:

	complete();
	wake_up();
	wake_up_all();
	wake_up_bit();
	wake_up_interruptible();
	wake_up_interruptible_all();
	wake_up_interruptible_nr();
	wake_up_interruptible_poll();
	wake_up_interruptible_sync();
	wake_up_interruptible_sync_poll();
	wake_up_locked();
	wake_up_locked_poll();
	wake_up_nr();
	wake_up_poll();
	wake_up_process();

In terms of memory ordering, these functions all provide the same guarantees
as wake_up() (or stronger).

[!] Note that the memory barriers implied by the sleeper and the waker do _not_
order multiple stores before the wake-up with respect to loads of those stored
values after the sleeper has called set_current_state().  For instance, if the
sleeper does:

	set_current_state(TASK_INTERRUPTIBLE);
	if (event_indicated)
		break;
	__set_current_state(TASK_RUNNING);
	do_something(my_data);

and the waker does:

	my_data = value;
	event_indicated = 1;
	wake_up(&event_wait_queue);

there's no guarantee that the change to event_indicated will be perceived by
the sleeper as coming after the change to my_data.  In such a circumstance, the
code on both sides must interpolate its own memory barriers between the
separate data accesses.  Thus the above sleeper ought to do:

	set_current_state(TASK_INTERRUPTIBLE);
	if (event_indicated) {
		smp_rmb();
		do_something(my_data);
	}

and the waker should do:

	my_data = value;
	smp_wmb();
	event_indicated = 1;
	wake_up(&event_wait_queue);


MISCELLANEOUS FUNCTIONS
-----------------------

Other functions that imply barriers:

 (*) schedule() and similar imply full memory barriers.


===================================
INTER-CPU ACQUIRING BARRIER EFFECTS
===================================

On SMP systems locking primitives give a more substantial form of barrier: one
that does affect memory access ordering on other CPUs, within the context of
conflict on any particular lock.


ACQUIRES VS MEMORY ACCESSES
---------------------------

Consider the following: the system has a pair of spinlocks (M) and (Q), and
three CPUs; then should the following sequence of events occur:

	CPU 1				CPU 2
	===============================	===============================
	WRITE_ONCE(*A, a);		WRITE_ONCE(*E, e);
	ACQUIRE M			ACQUIRE Q
	WRITE_ONCE(*B, b);		WRITE_ONCE(*F, f);
	WRITE_ONCE(*C, c);		WRITE_ONCE(*G, g);
	RELEASE M			RELEASE Q
	WRITE_ONCE(*D, d);		WRITE_ONCE(*H, h);

Then there is no guarantee as to what order CPU 3 will see the accesses to *A
through *H occur in, other than the constraints imposed by the separate locks
on the separate CPUs.  It might, for example, see:

	*E, ACQUIRE M, ACQUIRE Q, *G, *C, *F, *A, *B, RELEASE Q, *D, *H, RELEASE M

But it won't see any of:

	*B, *C or *D preceding ACQUIRE M
	*A, *B or *C following RELEASE M
	*F, *G or *H preceding ACQUIRE Q
	*E, *F or *G following RELEASE Q


=================================
WHERE ARE MEMORY BARRIERS NEEDED?
=================================

Under normal operation, memory operation reordering is generally not going to
be a problem as a single-threaded linear piece of code will still appear to
work correctly, even if it's in an SMP kernel.  There are, however, four
circumstances in which reordering definitely _could_ be a problem:

 (*) Interprocessor interaction.

 (*) Atomic operations.

 (*) Accessing devices.

 (*) Interrupts.


INTERPROCESSOR INTERACTION
--------------------------

When there's a system with more than one processor, more than one CPU in the
system may be working on the same data set at the same time.  This can cause
synchronisation problems, and the usual way of dealing with them is to use
locks.  Locks, however, are quite expensive, and so it may be preferable to
operate without the use of a lock if at all possible.  In such a case
operations that affect both CPUs may have to be carefully ordered to prevent
a malfunction.

Consider, for example, the R/W semaphore slow path.  Here a waiting process is
queued on the semaphore, by virtue of it having a piece of its stack linked to
the semaphore's list of waiting processes:

	struct rw_semaphore {
		...
		spinlock_t lock;
		struct list_head waiters;
	};

	struct rwsem_waiter {
		struct list_head list;
		struct task_struct *task;
	};

To wake up a particular waiter, the up_read() or up_write() functions have to:

 (1) read the next pointer from this waiter's record to know where the
     next waiter record is;

 (2) read the pointer to the waiter's task structure;

 (3) clear the task pointer to tell the waiter it has been given the semaphore;

 (4) call wake_up_process() on the task; and

 (5) release the reference held on the waiter's task struct.

In other words, it has to perform this sequence of events:

	LOAD waiter->list.next;
	LOAD waiter->task;
	STORE waiter->task;
	CALL wakeup
	RELEASE task

and if any of these steps occur out of order, then the whole thing may
malfunction.

Once it has queued itself and dropped the semaphore lock, the waiter does not
get the lock again; it instead just waits for its task pointer to be cleared
before proceeding.  Since the record is on the waiter's stack, this means that
if the task pointer is cleared _before_ the next pointer in the list is read,
another CPU might start processing the waiter and might clobber the waiter's
stack before the up*() function has a chance to read the next pointer.

Consider then what might happen to the above sequence of events:

	CPU 1				CPU 2
	===============================	===============================
					down_xxx()
					Queue waiter
					Sleep
	up_yyy()
	LOAD waiter->task;
	STORE waiter->task;
					Woken up by other event
	<preempt>
					Resume processing
					down_xxx() returns
					call foo()
					foo() clobbers *waiter
	</preempt>
	LOAD waiter->list.next;
	--- OOPS ---

This could be dealt with using the semaphore lock, but then the down_xxx()
function has to needlessly get the spinlock again after being woken up.

The way to deal with this is to insert a general SMP memory barrier:

	LOAD waiter->list.next;
	LOAD waiter->task;
	smp_mb();
	STORE waiter->task;
	CALL wakeup
	RELEASE task

In this case, the barrier makes a guarantee that all memory accesses before the
barrier will appear to happen before all the memory accesses after the barrier
with respect to the other CPUs on the system.  It does _not_ guarantee that all
the memory accesses before the barrier will be complete by the time the barrier
instruction itself is complete.

On a UP system - where this wouldn't be a problem - the smp_mb() is just a
compiler barrier, thus making sure the compiler emits the instructions in the
right order without actually intervening in the CPU.  Since there's only one
CPU, that CPU's dependency ordering logic will take care of everything else.


ATOMIC OPERATIONS
-----------------

While they are technically interprocessor interaction considerations, atomic
operations are noted specially as some of them imply full memory barriers and
some don't, but they're very heavily relied on as a group throughout the
kernel.

See Documentation/atomic_t.txt for more information.


ACCESSING DEVICES
-----------------

Many devices can be memory mapped, and so appear to the CPU as if they're just
a set of memory locations.  To control such a device, the driver usually has to
make the right memory accesses in exactly the right order.

However, having a clever CPU or a clever compiler creates a potential problem
in that the carefully sequenced accesses in the driver code won't reach the
device in the requisite order if the CPU or the compiler thinks it is more
efficient to reorder, combine or merge accesses - something that would cause
the device to malfunction.

Inside of the Linux kernel, I/O should be done through the appropriate accessor
routines - such as inb() or writel() - which know how to make such accesses
appropriately sequential.  While this, for the most part, renders the explicit
use of memory barriers unnecessary, if the accessor functions are used to refer
to an I/O memory window with relaxed memory access properties, then _mandatory_
memory barriers are required to enforce ordering.

See Documentation/driver-api/device-io.rst for more information.


INTERRUPTS
----------

A driver may be interrupted by its own interrupt service routine, and thus the
two parts of the driver may interfere with each other's attempts to control or
access the device.

This may be alleviated - at least in part - by disabling local interrupts (a
form of locking), such that the critical operations are all contained within
the interrupt-disabled section in the driver.  While the driver's interrupt
routine is executing, the driver's core may not run on the same CPU, and its
interrupt is not permitted to happen again until the current interrupt has been
handled; thus the interrupt handler does not need to lock against that.

However, consider a driver that was talking to an ethernet card that sports an
address register and a data register.  If that driver's core talks to the card
under interrupt-disablement and then the driver's interrupt handler is invoked:

	LOCAL IRQ DISABLE
	writew(ADDR, 3);
	writew(DATA, y);
	LOCAL IRQ ENABLE
	<interrupt>
	writew(ADDR, 4);
	q = readw(DATA);
	</interrupt>

The store to the data register might happen after the second store to the
address register if ordering rules are sufficiently relaxed:

	STORE *ADDR = 3, STORE *ADDR = 4, STORE *DATA = y, q = LOAD *DATA


If ordering rules are relaxed, it must be assumed that accesses done inside an
interrupt-disabled section may leak outside of it and may interleave with
accesses performed in an interrupt - and vice versa - unless implicit or
explicit barriers are used.

Normally this won't be a problem because the I/O accesses done inside such
sections will include synchronous load operations on strictly ordered I/O
registers that form implicit I/O barriers.
253362306a36Sopenharmony_ci
253462306a36Sopenharmony_ci
253562306a36Sopenharmony_ciA similar situation may occur between an interrupt routine and two routines
253662306a36Sopenharmony_cirunning on separate CPUs that communicate with each other.  If such a case is
253762306a36Sopenharmony_cilikely, then interrupt-disabling locks should be used to guarantee ordering.
253862306a36Sopenharmony_ci
253962306a36Sopenharmony_ci
==========================
KERNEL I/O BARRIER EFFECTS
==========================

Interfacing with peripherals via I/O accesses is deeply architecture and device
specific.  Therefore, drivers which are inherently non-portable may rely on
specific behaviours of their target systems in order to achieve synchronization
in the most lightweight manner possible.  For drivers intending to be portable
between multiple architectures and bus implementations, the kernel offers a
series of accessor functions that provide various degrees of ordering
guarantees:

 (*) readX(), writeX():

	The readX() and writeX() MMIO accessors take a pointer to the
	peripheral being accessed as an __iomem * parameter.  For pointers
	mapped with the default I/O attributes (e.g. those returned by
	ioremap()), the ordering guarantees are as follows:

	1. All readX() and writeX() accesses to the same peripheral are ordered
	   with respect to each other.  This ensures that MMIO register accesses
	   by the same CPU thread to a particular device will arrive in program
	   order.

	2. A writeX() issued by a CPU thread holding a spinlock is ordered
	   before a writeX() to the same peripheral from another CPU thread
	   issued after a later acquisition of the same spinlock.  This ensures
	   that MMIO register writes to a particular device issued while holding
	   a spinlock will arrive in an order consistent with acquisitions of
	   the lock.

	3. A writeX() by a CPU thread to the peripheral will first wait for the
	   completion of all prior writes to memory either issued by, or
	   propagated to, the same thread.  This ensures that writes by the CPU
	   to an outbound DMA buffer allocated by dma_alloc_coherent() will be
	   visible to a DMA engine when the CPU writes to its MMIO control
	   register to trigger the transfer.

	4. A readX() by a CPU thread from the peripheral will complete before
	   any subsequent reads from memory by the same thread can begin.  This
	   ensures that reads by the CPU from an incoming DMA buffer allocated
	   by dma_alloc_coherent() will not see stale data after reading from
	   the DMA engine's MMIO status register to establish that the DMA
	   transfer has completed.

	5. A readX() by a CPU thread from the peripheral will complete before
	   any subsequent delay() loop can begin execution on the same thread.
	   This ensures that two MMIO register writes by the CPU to a peripheral
	   will arrive at least 1us apart if the first write is immediately read
	   back with readX() and udelay(1) is called prior to the second
	   writeX():

		writel(42, DEVICE_REGISTER_0); // Arrives at the device...
		readl(DEVICE_REGISTER_0);
		udelay(1);
		writel(42, DEVICE_REGISTER_1); // ...at least 1us before this.

	The ordering properties of __iomem pointers obtained with non-default
	attributes (e.g. those returned by ioremap_wc()) are specific to the
	underlying architecture and therefore the guarantees listed above cannot
	generally be relied upon for accesses to these types of mappings.

 (*) readX_relaxed(), writeX_relaxed():

	These are similar to readX() and writeX(), but provide weaker memory
	ordering guarantees.  Specifically, they do not guarantee ordering with
	respect to locking, normal memory accesses or delay() loops (i.e.
	bullets 2-5 above) but they are still guaranteed to be ordered with
	respect to other accesses from the same CPU thread to the same
	peripheral when operating on __iomem pointers mapped with the default
	I/O attributes.

 (*) readsX(), writesX():

	The readsX() and writesX() MMIO accessors are designed for accessing
	register-based, memory-mapped FIFOs residing on peripherals that are not
	capable of performing DMA.  Consequently, they provide only the ordering
	guarantees of readX_relaxed() and writeX_relaxed(), as documented above.

 (*) inX(), outX():

	The inX() and outX() accessors are intended to access legacy port-mapped
	I/O peripherals, which may require special instructions on some
	architectures (notably x86).  The port number of the peripheral being
	accessed is passed as an argument.

	Since many CPU architectures ultimately access these peripherals via an
	internal virtual memory mapping, the portable ordering guarantees
	provided by inX() and outX() are the same as those provided by readX()
	and writeX() respectively when accessing a mapping with the default I/O
	attributes.

	Device drivers may expect outX() to emit a non-posted write transaction
	that waits for a completion response from the I/O peripheral before
	returning.  This is not guaranteed by all architectures and is therefore
	not part of the portable ordering semantics.

 (*) insX(), outsX():

	As above, the insX() and outsX() accessors provide the same ordering
	guarantees as readsX() and writesX() respectively when accessing a
	mapping with the default I/O attributes.

 (*) ioreadX(), iowriteX():

	These will perform appropriately for the type of access they're actually
	doing, be it inX()/outX() or readX()/writeX().

With the exception of the string accessors (insX(), outsX(), readsX() and
writesX()), all of the above assume that the underlying peripheral is
little-endian and will therefore perform byte-swapping operations on big-endian
architectures.
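
The DMA guarantees in bullets 3 and 4 of readX()/writeX() above have a
release/acquire shape: fill the buffer, then ring the doorbell; see the
doorbell, then read the buffer.  As a rough userspace analogy only - this is
not MMIO code, the dma_buf and doorbell names are invented here, and C11
atomics stand in for the ordering that writeX()/readX() provide - the pattern
can be sketched as:

```c
#include <pthread.h>
#include <stdatomic.h>

#define BUF_WORDS 16

static int dma_buf[BUF_WORDS];	/* stands in for a dma_alloc_coherent() buffer */
static atomic_int doorbell;	/* stands in for the device's MMIO control register */
static int seen[BUF_WORDS];	/* what the "device" observed */

static void *cpu_side(void *unused)
{
	for (int i = 0; i < BUF_WORDS; i++)
		dma_buf[i] = i + 1;
	/* Release: all buffer writes above complete before the doorbell,
	 * as writeX() waits for prior memory writes (bullet 3). */
	atomic_store_explicit(&doorbell, 1, memory_order_release);
	return NULL;
}

static void *device_side(void *unused)
{
	/* Acquire: later reads cannot begin before the doorbell is seen,
	 * as readX() completes before subsequent reads (bullet 4). */
	while (!atomic_load_explicit(&doorbell, memory_order_acquire))
		;
	for (int i = 0; i < BUF_WORDS; i++)
		seen[i] = dma_buf[i];
	return NULL;
}

int run_transfer(void)
{
	pthread_t cpu, dev;

	pthread_create(&cpu, NULL, cpu_side, NULL);
	pthread_create(&dev, NULL, device_side, NULL);
	pthread_join(cpu, NULL);
	pthread_join(dev, NULL);
	for (int i = 0; i < BUF_WORDS; i++)
		if (seen[i] != i + 1)
			return 0;	/* stale data observed */
	return 1;
}
```

Without the release/acquire pairing (i.e. with memory_order_relaxed on both
sides), the "device" could legally observe stale buffer contents.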


========================================
ASSUMED MINIMUM EXECUTION ORDERING MODEL
========================================

It has to be assumed that the conceptual CPU is weakly-ordered but that it will
maintain the appearance of program causality with respect to itself.  Some CPUs
(such as i386 or x86_64) are more constrained than others (such as powerpc or
frv), and so the most relaxed case (namely DEC Alpha) must be assumed outside
of arch-specific code.

This means that it must be considered that the CPU will execute its instruction
stream in any order it feels like - or even in parallel - provided that if an
instruction in the stream depends on an earlier instruction, then that
earlier instruction must be sufficiently complete[*] before the later
instruction may proceed; in other words: provided that the appearance of
causality is maintained.

 [*] Some instructions have more than one effect - such as changing the
     condition codes, changing registers or changing memory - and different
     instructions may depend on different effects.

A CPU may also discard any instruction sequence that winds up having no
ultimate effect.  For example, if two adjacent instructions both load an
immediate value into the same register, the first may be discarded.


Similarly, it has to be assumed that the compiler might reorder the instruction
stream in any way it sees fit, again provided the appearance of causality is
maintained.
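
The kernel restrains such compiler reordering with barrier().  A sketch of
the GCC-style definition it uses:

```c
/* An empty asm with a "memory" clobber: the compiler must assume memory
 * has changed, so it may neither cache values in registers across this
 * point nor move memory accesses over it.  The CPU itself is unaffected -
 * this is purely a compiler barrier (a sketch of the kernel's barrier()). */
#define barrier() __asm__ __volatile__("" : : : "memory")

int barrier_demo(int *p)
{
	*p = 1;		/* cannot be discarded in favour of the next store... */
	barrier();	/* ...because it must be emitted before this point */
	*p = 2;
	barrier();
	return *p;	/* must be re-read from memory, not assumed to be 2 */
}
```

Note that barrier() constrains only the compiler; ordering the CPU's own
memory accesses still requires the barrier instructions described earlier.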


============================
THE EFFECTS OF THE CPU CACHE
============================

The way cached memory operations are perceived across the system is affected to
a certain extent by the caches that lie between CPUs and memory, and by the
memory coherence system that maintains the consistency of state in the system.

As far as the way a CPU interacts with another part of the system through the
caches goes, the memory system has to include the CPU's caches, and memory
barriers for the most part act at the interface between the CPU and its cache
(memory barriers logically act on the dotted line in the following diagram):

	    <--- CPU --->         :       <----------- Memory ----------->
	                          :
	+--------+    +--------+  :   +--------+    +-----------+
	|        |    |        |  :   |        |    |           |    +--------+
	|  CPU   |    | Memory |  :   | CPU    |    |           |    |        |
	|  Core  |--->| Access |----->| Cache  |<-->|           |    |        |
	|        |    | Queue  |  :   |        |    |           |--->| Memory |
	|        |    |        |  :   |        |    |           |    |        |
	+--------+    +--------+  :   +--------+    |           |    |        |
	                          :                 | Cache     |    +--------+
	                          :                 | Coherency |
	                          :                 | Mechanism |    +--------+
	+--------+    +--------+  :   +--------+    |           |    |        |
	|        |    |        |  :   |        |    |           |    |        |
	|  CPU   |    | Memory |  :   | CPU    |    |           |--->| Device |
	|  Core  |--->| Access |----->| Cache  |<-->|           |    |        |
	|        |    | Queue  |  :   |        |    |           |    |        |
	|        |    |        |  :   |        |    |           |    +--------+
	+--------+    +--------+  :   +--------+    +-----------+
	                          :
	                          :

Although any particular load or store may not actually appear outside of the
CPU that issued it since it may have been satisfied within the CPU's own cache,
it will still appear as if the full memory access had taken place as far as the
other CPUs are concerned since the cache coherency mechanisms will migrate the
cacheline over to the accessing CPU and propagate the effects upon conflict.

The CPU core may execute instructions in any order it deems fit, provided the
expected program causality appears to be maintained.  Some of the instructions
generate load and store operations which then go into the queue of memory
accesses to be performed.  The core may place these in the queue in any order
it wishes, and continue execution until it is forced to wait for an instruction
to complete.

What memory barriers are concerned with is controlling the order in which
accesses cross from the CPU side of things to the memory side of things, and
the order in which the effects are perceived to happen by the other observers
in the system.

[!] Memory barriers are _not_ needed within a given CPU, as CPUs always see
their own loads and stores as if they had happened in program order.

[!] MMIO or other device accesses may bypass the cache system.  This depends on
the properties of the memory window through which devices are accessed and/or
the use of any special device communication instructions the CPU may have.


CACHE COHERENCY VS DMA
----------------------

Not all systems maintain cache coherency with respect to devices doing DMA.  In
such cases, a device attempting DMA may obtain stale data from RAM because
dirty cache lines may be resident in the caches of various CPUs, and may not
have been written back to RAM yet.  To deal with this, the appropriate part of
the kernel must flush the overlapping bits of cache on each CPU (and maybe
invalidate them as well).

In addition, the data DMA'd to RAM by a device may be overwritten by dirty
cache lines being written back to RAM from a CPU's cache after the device has
installed its own data, or cache lines present in the CPU's cache may simply
obscure the fact that RAM has been updated, until such time as the cacheline
is discarded from the CPU's cache and reloaded.  To deal with this, the
appropriate part of the kernel must invalidate the overlapping bits of the
cache on each CPU.

See Documentation/core-api/cachetlb.rst for more information on cache
management.


CACHE COHERENCY VS MMIO
-----------------------

Memory mapped I/O usually takes place through memory locations that are part of
a window in the CPU's memory space that has different properties assigned than
the usual RAM directed window.

Amongst these properties is usually the fact that such accesses bypass the
caching entirely and go directly to the device buses.  This means MMIO accesses
may, in effect, overtake accesses to cached memory that were emitted earlier.
A memory barrier isn't sufficient in such a case, but rather the cache must be
flushed between the cached memory write and the MMIO access if the two are in
any way dependent.


=========================
THE THINGS CPUS GET UP TO
=========================

A programmer might take it for granted that the CPU will perform memory
operations in exactly the order specified, so that if the CPU is, for example,
given the following piece of code to execute:

	a = READ_ONCE(*A);
	WRITE_ONCE(*B, b);
	c = READ_ONCE(*C);
	d = READ_ONCE(*D);
	WRITE_ONCE(*E, e);

they would then expect that the CPU will complete the memory operation for each
instruction before moving on to the next one, leading to a definite sequence of
operations as seen by external observers in the system:

	LOAD *A, STORE *B, LOAD *C, LOAD *D, STORE *E.


Reality is, of course, much messier.  With many CPUs and compilers, the above
assumption doesn't hold because:

 (*) loads are more likely to need to be completed immediately to permit
     execution progress, whereas stores can often be deferred without a
     problem;

 (*) loads may be done speculatively, and the result discarded should it prove
     to have been unnecessary;

 (*) loads may be done speculatively, leading to the result having been fetched
     at the wrong time in the expected sequence of events;

 (*) the order of the memory accesses may be rearranged to promote better use
     of the CPU buses and caches;

 (*) loads and stores may be combined to improve performance when talking to
     memory or I/O hardware that can do batched accesses of adjacent locations,
     thus cutting down on transaction setup costs (memory and PCI devices may
     both be able to do this); and

 (*) the CPU's data cache may affect the ordering, and while cache-coherency
     mechanisms may alleviate this - once the store has actually hit the cache
     - there's no guarantee that the coherency management will be propagated in
     order to other CPUs.

So what another CPU, say, might actually observe from the above piece of code
is:

	LOAD *A, ..., LOAD {*C,*D}, STORE *E, STORE *B

	(Where "LOAD {*C,*D}" is a combined load)


However, it is guaranteed that a CPU will be self-consistent: it will see its
_own_ accesses appear to be correctly ordered, without the need for a memory
barrier.  For instance with the following code:

	U = READ_ONCE(*A);
	WRITE_ONCE(*A, V);
	WRITE_ONCE(*A, W);
	X = READ_ONCE(*A);
	WRITE_ONCE(*A, Y);
	Z = READ_ONCE(*A);

and assuming no intervention by an external influence, it can be assumed that
the final result will appear to be:

	U == the original value of *A
	X == W
	Z == Y
	*A == Y
The code above may cause the CPU to generate the full sequence of memory
accesses:

	U=LOAD *A, STORE *A=V, STORE *A=W, X=LOAD *A, STORE *A=Y, Z=LOAD *A

in that order, but, without intervention, the sequence may have almost any
combination of elements combined or discarded, provided the program's view
of the world remains consistent.  Note that READ_ONCE() and WRITE_ONCE()
are -not- optional in the above example, as there are architectures
where a given CPU might reorder successive loads to the same location.
On such architectures, READ_ONCE() and WRITE_ONCE() do whatever is
necessary to prevent this; for example, on Itanium the volatile casts
used by READ_ONCE() and WRITE_ONCE() cause GCC to emit the special ld.acq
and st.rel instructions (respectively) that prevent such reordering.

The compiler may also combine, discard or defer elements of the sequence before
the CPU even sees them.

For instance:

	*A = V;
	*A = W;

may be reduced to:

	*A = W;

since, without either a write barrier or a WRITE_ONCE(), it can be
assumed that the effect of the storage of V to *A is lost.  Similarly:

	*A = Y;
	Z = *A;

may, without a memory barrier or a READ_ONCE() and WRITE_ONCE(), be
reduced to:

	*A = Y;
	Z = Y;

and the LOAD operation never appears outside of the CPU.


AND THEN THERE'S THE ALPHA
--------------------------

The DEC Alpha CPU is one of the most relaxed CPUs there is.  Not only that,
some versions of the Alpha CPU have a split data cache, permitting them to have
two semantically-related cache lines updated at separate times.  This is where
the address-dependency barrier really becomes necessary as this synchronises
both caches with the memory coherence system, thus making it seem like pointer
changes vs new data occur in the right order.

The Alpha defines the Linux kernel's memory model, although as of v4.15
the Linux kernel's addition of smp_mb() to READ_ONCE() on Alpha greatly
reduced its impact on the memory model.


VIRTUAL MACHINE GUESTS
----------------------

Guests running within virtual machines might be affected by SMP effects even if
the guest itself is compiled without SMP support.  This is an artifact of
interfacing with an SMP host while running an UP kernel.  Using mandatory
barriers for this use-case would be possible but is often suboptimal.

To handle this case optimally, low-level virt_mb() etc macros are available.
These have the same effect as smp_mb() etc when SMP is enabled, but generate
identical code for SMP and non-SMP systems.  For example, virtual machine
guests should use virt_mb() rather than smp_mb() when synchronizing against a
(possibly SMP) host.

These are equivalent to their smp_mb() etc counterparts in all other respects;
in particular, they do not control MMIO effects: to control MMIO effects, use
mandatory barriers.


============
EXAMPLE USES
============

CIRCULAR BUFFERS
----------------

Memory barriers can be used to implement circular buffering without the need
of a lock to serialise the producer with the consumer.  See:

	Documentation/core-api/circular-buffers.rst

for details.
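
A minimal single-producer/single-consumer ring in that spirit might look as
follows.  This is a userspace sketch using C11 atomics in place of the
kernel's smp_store_release()/smp_load_acquire(), not the code from that
document:

```c
#include <stdatomic.h>

#define RING_SIZE 8	/* must be a power of two */

struct ring {
	int buf[RING_SIZE];
	atomic_uint head;	/* written only by the producer */
	atomic_uint tail;	/* written only by the consumer */
};

static int ring_put(struct ring *r, int item)
{
	unsigned int head = atomic_load_explicit(&r->head, memory_order_relaxed);
	unsigned int tail = atomic_load_explicit(&r->tail, memory_order_acquire);

	if (head - tail >= RING_SIZE)
		return 0;			/* full */
	r->buf[head & (RING_SIZE - 1)] = item;
	/* Release: the item is visible before the new head index. */
	atomic_store_explicit(&r->head, head + 1, memory_order_release);
	return 1;
}

static int ring_get(struct ring *r, int *item)
{
	unsigned int tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
	unsigned int head = atomic_load_explicit(&r->head, memory_order_acquire);

	if (tail == head)
		return 0;			/* empty */
	*item = r->buf[tail & (RING_SIZE - 1)];
	/* Release: the slot is reusable only after the item has been read. */
	atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
	return 1;
}
```

Because each index has exactly one writer, the release/acquire pairing on
head and tail is the only synchronisation required; no lock serialises the
producer against the consumer.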


==========
REFERENCES
==========

Alpha AXP Architecture Reference Manual, Second Edition (Sites & Witek,
Digital Press)
	Chapter 5.2: Physical Address Space Characteristics
	Chapter 5.4: Caches and Write Buffers
	Chapter 5.5: Data Sharing
	Chapter 5.6: Read/Write Ordering

AMD64 Architecture Programmer's Manual Volume 2: System Programming
	Chapter 7.1: Memory-Access Ordering
	Chapter 7.4: Buffering and Combining Memory Writes

ARM Architecture Reference Manual (ARMv8, for ARMv8-A architecture profile)
	Chapter B2: The AArch64 Application Level Memory Model

IA-32 Intel Architecture Software Developer's Manual, Volume 3:
System Programming Guide
	Chapter 7.1: Locked Atomic Operations
	Chapter 7.2: Memory Ordering
	Chapter 7.4: Serializing Instructions

The SPARC Architecture Manual, Version 9
	Chapter 8: Memory Models
	Appendix D: Formal Specification of the Memory Models
	Appendix J: Programming with the Memory Models

Storage in the PowerPC (Stone and Fitzgerald)

UltraSPARC Programmer Reference Manual
	Chapter 5: Memory Accesses and Cacheability
	Chapter 15: Sparc-V9 Memory Models

UltraSPARC III Cu User's Manual
	Chapter 9: Memory Models

UltraSPARC IIIi Processor User's Manual
	Chapter 8: Memory Models

UltraSPARC Architecture 2005
	Chapter 9: Memory
	Appendix D: Formal Specifications of the Memory Models

UltraSPARC T1 Supplement to the UltraSPARC Architecture 2005
	Chapter 8: Memory Models
	Appendix F: Caches and Cache Coherency

Solaris Internals, Core Kernel Architecture, p63-68:
	Chapter 3.3: Hardware Considerations for Locks and
			Synchronization

Unix Systems for Modern Architectures, Symmetric Multiprocessing and Caching
for Kernel Programmers:
	Chapter 13: Other Memory Models

Intel Itanium Architecture Software Developer's Manual: Volume 1:
	Section 2.6: Speculation
	Section 4.4: Memory Access