Lines Matching refs:tail
9 * After the application reads the CQ ring tail, it must use an
11 * before writing the tail (using smp_load_acquire to read the tail will
19 * writing the SQ tail (ordering SQ entry stores with the tail store),
21 * to store the tail will do). And it needs a barrier ordering the SQ
27 * updating the SQ tail; a full memory barrier smp_mb() is needed
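The comment fragments above are the ordering contract between the application and the kernel for the shared rings. Below is a minimal userspace-side sketch of that contract, assuming C11 atomics in place of the smp_load_acquire()/smp_store_release() the comments mention; the sq_ktail/cq_khead/cq_ktail pointer names are hypothetical stand-ins for the tail/head words inside the mmap'd rings.

/*
 * Sketch only: C11-atomic version of the ordering rules quoted above.
 * sq_ktail/cq_khead/cq_ktail are assumed to point into the mmap'd rings.
 */
#include <stdatomic.h>
#include <stdint.h>

/* Submit: the SQE stores must be visible before the new tail is. */
static inline void sq_publish(_Atomic uint32_t *sq_ktail, uint32_t new_tail)
{
    atomic_store_explicit(sq_ktail, new_tail, memory_order_release);
}

/* Reap, step 1: acquire-load the tail so the CQE contents written by the
 * kernel before its tail store are visible to the application. */
static inline int cq_peek(_Atomic uint32_t *cq_khead, _Atomic uint32_t *cq_ktail,
                          uint32_t *head)
{
    uint32_t h = atomic_load_explicit(cq_khead, memory_order_relaxed);
    uint32_t t = atomic_load_explicit(cq_ktail, memory_order_acquire);

    if (h == t)
        return 0;               /* ring empty */
    *head = h;                  /* caller now reads cqes[h & ring_mask] */
    return 1;
}

/* Reap, step 2: only after the CQE has been read, release-store the new
 * head so the entry loads cannot be reordered past the head update. */
static inline void cq_advance(_Atomic uint32_t *cq_khead, uint32_t head)
{
    atomic_store_explicit(cq_khead, head + 1, memory_order_release);
}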
191 return READ_ONCE(ctx->rings->cq.tail) - READ_ONCE(ctx->rings->cq.head);
843 /* userspace may cheat modifying the tail, be safe and do min */
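The two matches above compute CQ occupancy from free-running u32 indices, then clamp the result because the tail lives in shared memory. A small sketch of why the unsigned subtraction stays correct across wraparound, and of the defensive "do min" clamp (helper names are illustrative, not the kernel's):

#include <stdint.h>

/* head and tail only ever increase as u32 values, so the unsigned
 * difference is the number of pending entries even after a wrap,
 * e.g. tail = 3, head = 0xfffffffe gives 5. */
static inline uint32_t ring_pending(uint32_t tail, uint32_t head)
{
    return tail - head;
}

/* Mirror of the "be safe and do min" comment: never trust a
 * shared-memory tail to claim more entries than the ring holds. */
static inline uint32_t ring_pending_clamped(uint32_t tail, uint32_t head,
                                            uint32_t ring_entries)
{
    uint32_t n = tail - head;
    return n < ring_entries ? n : ring_entries;
}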
1636 u32 tail = ctx->cached_cq_tail;
1647 if (tail != ctx->cached_cq_tail ||
2371 * The cached sq head (or cq tail) serves two purposes:
2395 /* make sure SQ entry isn't read before tail */
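The two matches above hint at the consumer-side pattern: the kernel keeps a private cached copy of the SQ head, reads the application-written tail with acquire semantics so the SQE stores that preceded it are visible, and publishes the head back to shared memory in batches. A rough, generic sketch of that pattern follows; the struct and function names are hypothetical, not the kernel's.

#include <stdatomic.h>
#include <stdint.h>

struct consumer {
    _Atomic uint32_t *shared_tail;  /* written by the producer (release) */
    _Atomic uint32_t *shared_head;  /* written by the consumer (release) */
    uint32_t cached_head;           /* private copy, cheap to advance */
};

static inline int consume_one(struct consumer *c, uint32_t *idx)
{
    uint32_t tail = atomic_load_explicit(c->shared_tail, memory_order_acquire);

    if (c->cached_head == tail)
        return -1;                   /* nothing queued */
    *idx = c->cached_head++;         /* entry at *idx may be read safely now */
    return 0;
}

static inline void publish_head(struct consumer *c)
{
    /* Batched, release-ordered publication of everything consumed so far. */
    atomic_store_explicit(c->shared_head, c->cached_head, memory_order_release);
}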
2453 int dist = READ_ONCE(ctx->rings->cq.tail) - (int) iowq->cq_tail;
2586 int nr_wait = (int) iowq.cq_tail - READ_ONCE(ctx->rings->cq.tail);
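Both matches above treat iowq cq_tail as a target ring position (the tail at wait start plus the number of events to wait for) and compare it against the live tail with a signed cast, which keeps the test correct across u32 wraparound as long as the real distance fits in 31 bits. A sketch of that arithmetic, with illustrative helper names:

#include <stdint.h>

/* True once the ring tail has reached or passed the target position. */
static inline int enough_completions(uint32_t cur_tail, uint32_t target_tail)
{
    return (int32_t)(cur_tail - target_tail) >= 0;
}

/* Mirrors the nr_wait computation above: how many more CQEs are needed. */
static inline int completions_still_needed(uint32_t target_tail, uint32_t cur_tail)
{
    int32_t n = (int32_t)(target_tail - cur_tail);
    return n > 0 ? n : 0;
}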
2644 return READ_ONCE(rings->cq.head) == READ_ONCE(rings->cq.tail) ? ret : 0;
3949 p->sq_off.tail = offsetof(struct io_rings, sq.tail);
3961 p->cq_off.tail = offsetof(struct io_rings, cq.tail);
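These two matches fill in the byte offsets that io_uring_setup() hands back to userspace in struct io_uring_params; the application mmaps the rings at the fixed IORING_OFF_* offsets and adds sq_off.tail/cq_off.tail to locate the tail words. A sketch of that setup path (error handling mostly omitted; this mirrors what liburing does rather than quoting it):

#include <linux/io_uring.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdint.h>
#include <string.h>

static int setup_rings(unsigned entries,
                       _Atomic uint32_t **sq_tail, _Atomic uint32_t **cq_tail)
{
    struct io_uring_params p;
    memset(&p, 0, sizeof(p));

    int fd = (int)syscall(__NR_io_uring_setup, entries, &p);
    if (fd < 0)
        return -1;

    /* Ring sizes as documented: SQ index array and CQE array end the maps. */
    size_t sq_sz = p.sq_off.array + p.sq_entries * sizeof(uint32_t);
    size_t cq_sz = p.cq_off.cqes + p.cq_entries * sizeof(struct io_uring_cqe);

    void *sq = mmap(NULL, sq_sz, PROT_READ | PROT_WRITE,
                    MAP_SHARED | MAP_POPULATE, fd, IORING_OFF_SQ_RING);
    void *cq = mmap(NULL, cq_sz, PROT_READ | PROT_WRITE,
                    MAP_SHARED | MAP_POPULATE, fd, IORING_OFF_CQ_RING);

    /* The offsets from sq_off/cq_off turn the raw mappings into pointers. */
    *sq_tail = (_Atomic uint32_t *)((char *)sq + p.sq_off.tail);
    *cq_tail = (_Atomic uint32_t *)((char *)cq + p.cq_off.tail);
    return fd;
}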
4641 offsetof(struct io_uring_buf_ring, tail));
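This last match is part of a compile-time layout check: struct io_uring_buf_ring keeps its tail in the slot that overlays the resv field of the first struct io_uring_buf, so the tail and the buffer descriptors share one mapping. A rough sketch of how an application refills such a provided-buffer ring, loosely modeled on the liburing helpers rather than quoting them: write the descriptor first, then publish the tail with a release store.

#include <linux/io_uring.h>
#include <stdatomic.h>
#include <stdint.h>

static void buf_ring_add_and_commit(struct io_uring_buf_ring *br,
                                    void *addr, unsigned len,
                                    unsigned short bid, unsigned mask)
{
    struct io_uring_buf *buf = &br->bufs[br->tail & mask];

    buf->addr = (uint64_t)(uintptr_t)addr;
    buf->len  = len;
    buf->bid  = bid;

    /* Publish: the kernel must not observe the new tail before the entry.
     * Only the application writes the tail, so the plain read is fine. */
    atomic_store_explicit((_Atomic uint16_t *)&br->tail,
                          (uint16_t)(br->tail + 1), memory_order_release);
}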