// This module provides a relatively simple thread-safe pool of reusable
// objects. For the most part, it's implemented by a stack represented by a
// Mutex<Vec<T>>. It has one small trick: because locking and unlocking a
// mutex is somewhat costly, in the case where a pool is accessed by the first
// thread that tried to get a value, we bypass the mutex. Here are some
// benchmarks showing the difference.
//
// 1) misc::anchored_literal_long_non_match    21 (18571 MB/s)
// 2) misc::anchored_literal_long_non_match   107 (3644 MB/s)
// 3) misc::anchored_literal_long_non_match    45 (8666 MB/s)
// 4) misc::anchored_literal_long_non_match    19 (20526 MB/s)
//
// (1) represents our baseline: the master branch at the time of writing when
// using the 'thread_local' crate to implement the pool below.
//
// (2) represents a naive pool implemented completely via Mutex<Vec<T>>. There
// is no special trick for bypassing the mutex.
//
// (3) is the same as (2), except it uses Mutex<Vec<Box<T>>>. It is twice as
// fast because a Box<T> is much smaller than the T we use with a Pool in this
// crate. So pushing and popping a Box<T> from a Vec is quite a bit faster
// than for T.
//
// (4) is the same as (3), but with the trick for bypassing the mutex in the
// case of the first-to-get thread.
//
// Why move off of thread_local? Even though (4) is a hair faster than (1)
// above, this was not the main goal. The main goal was to move off of
// thread_local and find a way to *simply* re-capture some of its speed for
// regex's specific case. So again, why move off of it? The *primary* reason
// is memory leaks. See https://github.com/rust-lang/regex/issues/362 for
// example. (Why do I want it to be simple? Well, I suppose what I mean is,
// "use as much safe code as possible to minimize risk and be as sure as I can
// be that it is correct.")
//
// My guess is that the thread_local design is probably not appropriate for
// regex since its memory usage scales with the number of active threads that
// have used a regex, whereas the pool below scales with the number of threads
// that simultaneously use a regex. While neither case permits contraction,
// since we own the pool data structure below, we can add contraction if a
// clear use case pops up in the wild. More pressingly though, it seems that
// there are at least some use case patterns where one might have many threads
// sitting around that might have used a regex at one point. While thread_local
// does try to reuse space previously used by a thread that has since stopped,
// its maximal memory usage still scales with the total number of active
// threads. In contrast, the pool below scales with the total number of threads
// *simultaneously* using the pool. The hope is that this uses less memory
// overall. And if it doesn't, we can hopefully tune it somehow.
//
// It seems that these sorts of conditions happen frequently
// in FFI inside of other more "managed" languages. This was
// mentioned in the issue linked above, and also mentioned here:
// https://github.com/BurntSushi/rure-go/issues/3. And in particular, users
// confirm that disabling the use of thread_local resolves the leak.
//
// There were other weaker reasons for moving off of thread_local as well.
// Namely, at the time, I was looking to reduce dependencies. And for something
// like regex, maintenance can be simpler when we own the full dependency tree.
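//
// For concreteness, the "naive" pool measured as (3) above is roughly the
// following. This is a sketch for illustration only; 'NaivePool' and its
// methods are made-up names and are not part of this module:
//
//     use std::sync::Mutex;
//
//     struct NaivePool<T> {
//         stack: Mutex<Vec<Box<T>>>,
//         create: Box<dyn Fn() -> T + Send + Sync>,
//     }
//
//     impl<T: Send> NaivePool<T> {
//         fn get(&self) -> Box<T> {
//             // Every call pays for the mutex, even when only one thread
//             // ever touches the pool.
//             self.stack
//                 .lock()
//                 .unwrap()
//                 .pop()
//                 .unwrap_or_else(|| Box::new((self.create)()))
//         }
//
//         fn put(&self, value: Box<T>) {
//             self.stack.lock().unwrap().push(value);
//         }
//     }
//
// The Pool below keeps exactly this stack for the general case, but adds an
// owner fast path so that the common single-threaded case skips the mutex.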

use std::panic::{RefUnwindSafe, UnwindSafe};
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Mutex;

/// An atomic counter used to allocate thread IDs.
static COUNTER: AtomicUsize = AtomicUsize::new(1);

thread_local!(
    /// A thread local used to assign an ID to a thread.
    static THREAD_ID: usize = {
        let next = COUNTER.fetch_add(1, Ordering::Relaxed);
        // SAFETY: We cannot permit the reuse of thread IDs since reusing a
        // thread ID might result in more than one thread "owning" a pool,
        // and thus, permit accessing a mutable value from multiple threads
        // simultaneously without synchronization. The intent of this panic is
        // to be a sanity check. It is not expected that the thread ID space
        // will actually be exhausted in practice.
        //
        // This checks that the counter never wraps around, since atomic
        // addition wraps around on overflow.
        if next == 0 {
            panic!("regex: thread ID allocation space exhausted");
        }
        next
    };
);

/// The type of the function used to create values in a pool when the pool is
/// empty and the caller requests one.
type CreateFn<T> =
    Box<dyn Fn() -> T + Send + Sync + UnwindSafe + RefUnwindSafe + 'static>;

/// A simple thread-safe pool for reusing values.
///
/// Getting a value out comes with a guard. When that guard is dropped, the
/// value is automatically put back in the pool.
///
/// A Pool<T> impls Sync when T is Send (even if T is not Sync). This means
/// that T can use interior mutability. This is possible because a pool never
/// hands out any particular value to more than one thread at a time.
///
/// Currently, a pool never contracts in size. Its size is proportional to the
/// maximum number of simultaneous uses.
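///
/// # Example
///
/// A minimal sketch of how a pool is used. The element type and closure here
/// are placeholders for illustration (real uses in this crate store mutable
/// scratch space), and the block is marked `ignore` because this type is
/// crate-internal:
///
/// ```ignore
/// let pool: Pool<Vec<u8>> = Pool::new(Box::new(|| Vec::with_capacity(1024)));
/// {
///     let guard = pool.get();
///     // The value is ours alone until the guard is dropped.
///     assert!(guard.value().is_empty());
/// } // dropping the guard makes the value available to 'get' again
/// ```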
pub struct Pool<T> {
    /// A stack of T values to hand out. These are used when a Pool is
    /// accessed by a thread other than its owner.
    stack: Mutex<Vec<Box<T>>>,
    /// A function to create more T values when stack is empty and a caller
    /// has requested a T.
    create: CreateFn<T>,
    /// The ID of the thread that owns this pool. The owner is the thread
    /// that makes the first call to 'get'. When the owner calls 'get', it
    /// gets 'owner_val' directly instead of returning a T from 'stack'.
    /// See comments elsewhere for details, but this is intended to be an
    /// optimization for the common case that makes getting a T faster.
    ///
    /// It is initialized to a value of zero (an impossible thread ID) as a
    /// sentinel to indicate that it is unowned.
    owner: AtomicUsize,
    /// A value to return when the caller is the thread that owns this Pool.
    owner_val: T,
}

// SAFETY: Since we want to use a Pool from multiple threads simultaneously
// behind an Arc, we need it to be Sync. In cases where T is Sync, Pool<T>
// would be Sync. However, since we use a Pool to store mutable scratch space,
// we wind up using a T that has interior mutability and is thus itself not
// Sync. So what we *really* want is for our Pool<T> to be Sync even when T is
// not Sync (but is at least Send).
//
// The only non-Sync aspect of a Pool is its 'owner_val' field, which is used
// to implement faster access to a pool value in the common case of a pool
// being accessed repeatedly from its owning thread. The 'stack' field is also
// shared, but a Mutex<T> where T: Send is already Sync. So we only need to
// worry about 'owner_val'.
//
// The key is to guarantee that 'owner_val' can only ever be accessed from one
// thread. In our implementation below, we guarantee this by only returning
// 'owner_val' when the ID of the current thread matches the ID of the thread
// that owns the Pool, i.e., the first thread to call 'get'. Since this can
// only ever be one thread, it follows that only one thread can access
// 'owner_val' at any point in time. Thus, it is safe to declare that Pool<T>
// is Sync when T is Send.
//
// NOTE: It would also be possible to make the owning thread be the thread
// that *creates* the Pool rather than the first thread that tries to get a
// value out of it. Assigning ownership lazily to the first getter is what the
// 'owner' atomic and the compare-and-swap in 'get_slow' are for, and it's not
// clear whether fixing the owner at construction time would be meaningfully
// better.
//
// If there is a way to achieve our performance goals using safe code, then
// I would very much welcome a patch. As it stands, the implementation below
// tries to balance safety with performance. The case where a Regex is used
// from multiple threads simultaneously will suffer a bit since getting a
// cache will require locking a mutex.
unsafe impl<T: Send> Sync for Pool<T> {}
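// As a concrete illustration of what the impl above buys us (a sketch;
// 'RefCell' is just a stand-in for the interior-mutable scratch types used in
// this crate): 'RefCell<Vec<u8>>' is Send but not Sync, so without this impl,
// 'Arc<Pool<RefCell<Vec<u8>>>>' would not be Send and could not be moved into
// 'std::thread::spawn'. With it, the following compiles and runs:
//
//     use std::cell::RefCell;
//     use std::sync::Arc;
//
//     let pool = Arc::new(Pool::new(Box::new(|| RefCell::new(vec![0u8]))));
//     let p = Arc::clone(&pool);
//     std::thread::spawn(move || { let _ = p.get(); }).join().unwrap();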

impl<T: ::std::fmt::Debug> ::std::fmt::Debug for Pool<T> {
    fn fmt(&self, f: &mut ::std::fmt::Formatter<'_>) -> ::std::fmt::Result {
        f.debug_struct("Pool")
            .field("stack", &self.stack)
            .field("owner", &self.owner)
            .field("owner_val", &self.owner_val)
            .finish()
    }
}

/// A guard that is returned when a caller requests a value from the pool.
///
/// The purpose of the guard is to use RAII to automatically put the value back
/// in the pool once it's dropped.
#[derive(Debug)]
pub struct PoolGuard<'a, T: Send> {
    /// The pool that this guard is attached to.
    pool: &'a Pool<T>,
    /// This is None when the guard represents the special "owned" value, in
    /// which case the value is retrieved from 'pool.owner_val'.
    value: Option<Box<T>>,
}

impl<T: Send> Pool<T> {
    /// Create a new pool. The given closure is used to create values in the
    /// pool when necessary.
    pub fn new(create: CreateFn<T>) -> Pool<T> {
        let owner = AtomicUsize::new(0);
        let owner_val = create();
        Pool { stack: Mutex::new(vec![]), create, owner, owner_val }
    }

    /// Get a value from the pool. The caller is guaranteed to have exclusive
    /// access to the given value.
    ///
    /// Note that there is no guarantee provided about which value in the
    /// pool is returned. That is, calling get, dropping the guard (causing
    /// the value to go back into the pool) and then calling get again is NOT
    /// guaranteed to return the same value received in the first get call.
    #[cfg_attr(feature = "perf-inline", inline(always))]
    pub fn get(&self) -> PoolGuard<'_, T> {
        // Our fast path checks if the caller is the thread that "owns" this
        // pool. Or stated differently, whether it is the first thread that
        // tried to extract a value from the pool. If it is, then we can return
        // a T to the caller without going through a mutex.
        //
        // SAFETY: We must guarantee that only one thread gets access to this
        // value. Since a thread is uniquely identified by the THREAD_ID thread
        // local, it follows that if the caller's thread ID is equal to the
        // owner, then only one thread may receive this value.
        let caller = THREAD_ID.with(|id| *id);
        let owner = self.owner.load(Ordering::Relaxed);
        if caller == owner {
            return self.guard_owned();
        }
        self.get_slow(caller, owner)
    }

    /// This is the "slow" version that goes through a mutex to pop an
    /// allocated value off a stack to return to the caller. (Or, if the stack
    /// is empty, a new value is created.)
    ///
    /// If the pool has no owner, then this will set the owner.
    #[cold]
    fn get_slow(&self, caller: usize, owner: usize) -> PoolGuard<'_, T> {
        use std::sync::atomic::Ordering::Relaxed;

        if owner == 0 {
            // The sentinel 0 value means this pool is not yet owned. We
            // try to atomically set the owner. If we do, then this thread
            // becomes the owner and we can return a guard that represents
            // the special T for the owner.
            let res = self.owner.compare_exchange(0, caller, Relaxed, Relaxed);
            if res.is_ok() {
                return self.guard_owned();
            }
        }
        let mut stack = self.stack.lock().unwrap();
        let value = match stack.pop() {
            None => Box::new((self.create)()),
            Some(value) => value,
        };
        self.guard_stack(value)
    }

    /// Puts a value back into the pool. Callers don't need to call this. Once
    /// the guard that's returned by 'get' is dropped, it is put back into the
    /// pool automatically.
    fn put(&self, value: Box<T>) {
        let mut stack = self.stack.lock().unwrap();
        stack.push(value);
    }

    /// Create a guard that represents the special owned T.
    fn guard_owned(&self) -> PoolGuard<'_, T> {
        PoolGuard { pool: self, value: None }
    }

    /// Create a guard that contains a value from the pool's stack.
    fn guard_stack(&self, value: Box<T>) -> PoolGuard<'_, T> {
        PoolGuard { pool: self, value: Some(value) }
    }
}

impl<'a, T: Send> PoolGuard<'a, T> {
    /// Return the underlying value.
    pub fn value(&self) -> &T {
        match self.value {
            None => &self.pool.owner_val,
            Some(ref v) => &**v,
        }
    }
}

impl<'a, T: Send> Drop for PoolGuard<'a, T> {
    #[cfg_attr(feature = "perf-inline", inline(always))]
    fn drop(&mut self) {
        if let Some(value) = self.value.take() {
            self.pool.put(value);
        }
    }
}

#[cfg(test)]
mod tests {
    use std::panic::{RefUnwindSafe, UnwindSafe};

    use super::*;

    #[test]
    fn oibits() {
        use crate::exec::ProgramCache;

        fn has_oibits<T: Send + Sync + UnwindSafe + RefUnwindSafe>() {}
        has_oibits::<Pool<ProgramCache>>();
    }

    // Tests that Pool implements the "single owner" optimization. That is, the
    // thread that first accesses the pool gets its own copy, while all other
    // threads get distinct copies.
    #[test]
    fn thread_owner_optimization() {
        use std::cell::RefCell;
        use std::sync::Arc;

        let pool: Arc<Pool<RefCell<Vec<char>>>> =
            Arc::new(Pool::new(Box::new(|| RefCell::new(vec!['a']))));
        pool.get().value().borrow_mut().push('x');

        let pool1 = pool.clone();
        let t1 = std::thread::spawn(move || {
            let guard = pool1.get();
            let v = guard.value();
            v.borrow_mut().push('y');
        });

        let pool2 = pool.clone();
        let t2 = std::thread::spawn(move || {
            let guard = pool2.get();
            let v = guard.value();
            v.borrow_mut().push('z');
        });

        t1.join().unwrap();
        t2.join().unwrap();

        // If we didn't implement the single owner optimization, then one of
        // the threads above is likely to have mutated the [a, x] vec that
        // we stuffed in the pool before spawning the threads. But since
        // neither thread was first to access the pool, and because of the
        // optimization, we should be guaranteed that neither thread mutates
        // the special owned pool value.
        //
        // (Technically this is an implementation detail and not a contract of
        // Pool's API.)
        assert_eq!(vec!['a', 'x'], *pool.get().value().borrow());
    }
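
    // Another sanity check that, like the test above, leans on implementation
    // details rather than Pool's documented contract: the thread that first
    // calls 'get' becomes the owner and is always handed the same owned
    // value, so its mutations persist across get/drop cycles on that thread.
    #[test]
    fn owner_reuses_owned_value() {
        use std::cell::RefCell;

        let pool: Pool<RefCell<Vec<char>>> =
            Pool::new(Box::new(|| RefCell::new(vec![])));
        pool.get().value().borrow_mut().push('a');
        pool.get().value().borrow_mut().push('b');
        assert_eq!(vec!['a', 'b'], *pool.get().value().borrow());
    }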
}