Lines Matching defs:request
72 * may be freed when the request is no longer in use by the GPU.
127 * The request is put onto an RCU freelist (i.e. the address
137 * Keep one request on each engine for reserved use under mempressure.
147 * Since the request must have been executed to have completed,
157 * For example, consider the flow of a bonded request through a virtual
158 * engine. The request is created with a wide engine mask (all engines
159 * that we might execute on). On processing the bond, the request mask
160 * is reduced to one or more engines. If the request is subsequently
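A minimal sketch of the mask handling described above, using hypothetical names rather than the driver's own types: the request starts out with a wide mask of candidate engines, and processing the bond intersects it with the bonded set.

typedef unsigned long engine_mask_t;	/* one bit per physical engine (illustrative) */

/* Restrict a request's candidate engines when its bond resolves. */
static engine_mask_t narrow_engine_mask(engine_mask_t wide, engine_mask_t bonded)
{
	/* Only engines present in both sets remain eligible to execute. */
	return wide & bonded;
}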
236 * @rq: request to inspect
239 * Fills the @active pointer with the currently active engine if the request
242 * Returns true if the request was active, false otherwise.
253 * is-banned?, or we know the request is already inflight.
371 * We know the GPU must have read the request to have
373 * of tail of the request to update the last known position
376 * Note this requires that we are always called in request
398 * request that we have removed from the HW and put back on a run
401 * As we set I915_FENCE_FLAG_ACTIVE on the request, this should be
403 * inadvertently attach the breadcrumb to a completed request.
448 * Even if we have unwound the request, it may still be on
449 * the GPU (preempt-to-busy). If that request is inside an
451 * GPU functions may even be stuck waiting for the paired request
456 * requests, we know that only the currently executing request
459 * which request is currently active and so may be stuck, as
528 * request (then flush the execute_cb). So by registering the
530 * the completed/retired request.
563 * As this request likely depends on state from the lost
596 /* As soon as the request is completed, it may be retired */
605 bool __i915_request_submit(struct i915_request *request)
607 struct intel_engine_cs *engine = request->engine;
610 RQ_TRACE(request, "\n");
619 * resubmission of that completed request, we can skip
621 * the request.
623 * We must remove the request from the caller's priority queue,
624 * and the caller must only call us when the request is in their
626 * request has *not* yet been retired and we can safely move
627 * the request into the engine->active.list where it will be
629 * request, this would be a horrible use-after-free.)
631 if (__i915_request_is_complete(request)) {
632 list_del_init(&request->sched.link);
636 if (unlikely(!intel_context_is_schedulable(request->context)))
637 i915_request_set_error_once(request, -EIO);
639 if (unlikely(fatal_error(request->fence.error)))
640 __i915_request_skip(request);
651 * If we installed a semaphore on this request and we only submit
652 * the request after the signaler completed, that indicates the
658 if (request->sched.semaphores &&
659 i915_sw_fence_signaled(&request->semaphore))
660 engine->saturated |= request->sched.semaphores;
662 engine->emit_fini_breadcrumb(request,
663 request->ring->vaddr + request->postfix);
665 trace_i915_request_execute(request);
673 GEM_BUG_ON(test_bit(I915_FENCE_FLAG_ACTIVE, &request->fence.flags));
674 engine->add_active_request(request);
676 clear_bit(I915_FENCE_FLAG_PQUEUE, &request->fence.flags);
677 set_bit(I915_FENCE_FLAG_ACTIVE, &request->fence.flags);
689 __notify_execute_cb_irq(request);
692 if (test_bit(DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT, &request->fence.flags))
693 i915_request_enable_breadcrumb(request);
698 void i915_request_submit(struct i915_request *request)
700 struct intel_engine_cs *engine = request->engine;
706 __i915_request_submit(request);
711 void __i915_request_unsubmit(struct i915_request *request)
713 struct intel_engine_cs *engine = request->engine;
719 RQ_TRACE(request, "\n");
727 * attach itself. We first mark the request as no longer active and
731 GEM_BUG_ON(!test_bit(I915_FENCE_FLAG_ACTIVE, &request->fence.flags));
732 clear_bit_unlock(I915_FENCE_FLAG_ACTIVE, &request->fence.flags);
733 if (test_bit(DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT, &request->fence.flags))
734 i915_request_cancel_breadcrumb(request);
737 if (request->sched.semaphores && __i915_request_has_started(request))
738 request->sched.semaphores = 0;
741 * We don't need to wake_up any waiters on request->execute, they
742 * will get woken by any other event or us re-adding this request
744 * should be quite adept at finding that the request now has a new
749 void i915_request_unsubmit(struct i915_request *request)
751 struct intel_engine_cs *engine = request->engine;
757 __i915_request_unsubmit(request);
775 struct i915_request *request =
776 container_of(fence, typeof(*request), submit);
780 trace_i915_request_submit(request);
783 i915_request_set_error_once(request, fence->error);
785 __rq_arm_watchdog(request);
796 request->engine->submit_request(request);
801 i915_request_put(request);
853 /* Move our oldest request to the slab-cache (if not in use!) */
911 * race with the request being allocated from the slab freelist.
912 * That is, the request we are writing to here may be in the process
915 * the RCU lookup, we chase the request->engine pointer,
916 * read the request->global_seqno and increment the reference count.
919 * the lookup knows the request is unallocated and complete. Otherwise,
922 * check that the request we have a reference to matches the active
923 * request.
925 * Before we increment the refcount, we chase the request->engine
928 * we see the request is completed (based on the value of the
930 * If we decide the request is not completed (new engine or seqno),
932 * active request - which it won't be - and restart the lookup.
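The comments above describe the lockless lookup pattern used with a typesafe-by-RCU slab: sample the identifying fields, take a reference only if the object is still live, then re-check the identity so a recycled slab object is rejected and the lookup restarted. A rough userspace model of that check/acquire/recheck loop, with hypothetical names and C11 atomics standing in for kref and RCU (a sketch of the pattern, not the driver's implementation):

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct fake_request {
	_Atomic unsigned int refcount;	/* 0 => free for reuse */
	_Atomic unsigned int seqno;	/* identity we validate against */
};

/* Take a reference only if the object has not already been freed. */
static bool get_unless_zero(struct fake_request *rq)
{
	unsigned int old = atomic_load(&rq->refcount);

	while (old) {
		if (atomic_compare_exchange_weak(&rq->refcount, &old, old + 1))
			return true;
	}
	return false;	/* object was already freed (or is being recycled) */
}

/* Return the request only if it is still the one identified by @seqno. */
static struct fake_request *lookup(struct fake_request *slot, unsigned int seqno)
{
	if (atomic_load(&slot->seqno) != seqno)
		return NULL;			/* completed, or never the one we wanted */

	if (!get_unless_zero(slot))
		return NULL;			/* raced with the free */

	if (atomic_load(&slot->seqno) != seqno) {
		atomic_fetch_sub(&slot->refcount, 1);
		return NULL;			/* slab slot was reused; caller retries */
	}
	return slot;
}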
982 * eventually emit this request. This is to guarantee that the
984 * to be redone if the request is not actually submitted straight
988 * we need to double our request to ensure that if we need to wrap
996 * Record the position of the start of the request so that
998 * GPU processing the request, we never over-estimate the
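A toy model of the reservation part of the comments above, under the assumption that the point is simply to hold back enough ring space for the closing commands so that finishing the request later cannot fail (hypothetical names, not the driver's ring code):

#include <stdbool.h>
#include <stddef.h>

struct fake_ring {
	size_t size;		/* total ring size in bytes */
	size_t used;		/* bytes currently consumed */
	size_t reserved;	/* space held back for the final breadcrumb */
};

/* Claim @bytes now, plus @reserve held back for the closing commands. */
static bool fake_ring_begin(struct fake_ring *ring, size_t bytes, size_t reserve)
{
	if (ring->used + bytes + reserve > ring->size)
		return false;		/* would not fit, even with the reserve */

	ring->used += bytes;
	ring->reserved = reserve;
	return true;
}

/* Emitting the closing commands consumes the reserve and so cannot fail. */
static void fake_ring_emit_fini(struct fake_ring *ring)
{
	ring->used += ring->reserved;
	ring->reserved = 0;
}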
1038 /* Move our oldest request to the slab-cache (if not in use!) */
1045 intel_context_exit(ce); /* active reference transferred to request */
1049 /* Check that we do not interrupt ourselves with a new request */
1075 * We do not hold a reference to the request before @signal, and
1089 /* Is signal the earliest request on its timeline? */
1094 * Peek at the request before us in the timeline. That
1095 * request will only be valid before it is retired, so
1226 /* Just emit the first semaphore we see as request space is limited. */
1282 * Wait until the start of this request.
1284 * The execution cb fires when we submit the request to HW. But in
1285 * many cases this may be long before the request itself is ready to
1287 * the request of interest is behind an indefinite spinner). So we hook
1341 * fatal errors we want to scrub the request before it is executed,
1342 * which means that we cannot preload the request onto HW and have
1458 * as it may then bypass the virtual request.
1562 * i915_request_await_deps - set this request to (async) wait upon a struct
1564 * @rq: request we are wishing to use
1583 * i915_request_await_object - set this request to (async) wait upon a bo
1584 * @to: request we are wishing to use
1593 * - If there is an outstanding write request to the object, the new
1594 * request must wait for it to complete (either CPU or in hw, requests
1597 * - If we are a write request (pending_write_domain is set), the new
1598 * request must wait for outstanding read requests to complete.
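The two rules in the kerneldoc above reduce to the usual implicit-sync ordering: every new request orders after the last write to the object, and a writing request additionally orders after all outstanding reads. A hedged sketch with hypothetical types (fake_await() stands in for the real fence-await call):

#include <stdbool.h>
#include <stddef.h>

struct fake_fence { int id; };

struct fake_object {
	struct fake_fence *last_write;	/* most recent write to the object */
	struct fake_fence *reads[8];	/* outstanding reads */
	size_t nreads;
};

static void fake_await(struct fake_fence *f)
{
	/* stand-in for making the new request wait on @f */
	(void)f;
}

static void fake_await_object(struct fake_object *obj, bool write)
{
	size_t i;

	if (obj->last_write)
		fake_await(obj->last_write);	/* reads and writes order after the write */

	if (write) {
		for (i = 0; i < obj->nreads; i++)
			fake_await(obj->reads[i]);	/* a write also orders after all reads */
	}
}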
1663 * __i915_active_fence_set() to the returned request
1736 * Dependency tracking and request ordering along the timeline
1738 * operations while building the request (we know that the timeline
1742 * we embed the hooks into our request struct -- at the cost of
1750 * to prevent scheduling of the second request until the first is
1759 * timeline we store a pointer to the last request submitted in the
1761 * between that request and the request passed into this function or
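A minimal sketch, assuming the scheme above boils down to remembering the last request submitted on the timeline and making each new request await it, so that requests on one timeline execute in submission order (hypothetical names, not the driver's structures):

struct fake_request;	/* opaque stand-in for a request */

struct fake_timeline {
	struct fake_request *last;	/* most recently added request */
};

/* Stand-in for hooking up a dependency between two requests. */
static void fake_order_after(struct fake_request *waiter, struct fake_request *signaler)
{
	(void)waiter;
	(void)signaler;
}

static void fake_timeline_add(struct fake_timeline *tl, struct fake_request *rq)
{
	if (tl->last)
		fake_order_after(rq, tl->last);	/* execute after the previous request */
	tl->last = rq;				/* new tail of the timeline */
}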
1773 * Make sure that no request gazumped us - if it was allocated after
1784 * request is not being tracked for completion but the work itself is
1807 * GPU processing the request, we never over-estimate the
1827 * Let the backend know a new request has arrived that may need
1829 * request - i.e. we may want to preempt the current request in order
1831 * request.
1833 * This is called before the request is ready to run so that we can
1908 * Only wait for the request if we know it is likely to complete.
1911 * request length, so we do not have a good indicator that this
1912 * request will complete within the timeout. What we do know is the
1914 * tell if the request has been started. If the request is not even
1925 * rate. By busywaiting on the request completion for a short while we
1927 * if it is a slow request, we want to sleep as quickly as possible.
1929 * takes to sleep on a request, on the order of a microsecond.
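A rough userspace model of the spin-then-sleep policy sketched above (hypothetical names and budget, not the driver's wait loop): poll the completion flag for a short, bounded window before falling back to a real sleep.

#include <stdatomic.h>
#include <stdbool.h>
#include <time.h>

#define SPIN_NS (5 * 1000)	/* ~5us busywait budget (illustrative only) */

static long long now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (long long)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

static bool spin_for_completion(const _Atomic bool *done)
{
	long long deadline = now_ns() + SPIN_NS;

	do {
		if (atomic_load(done))
			return true;	/* fast path: completed while spinning */
	} while (now_ns() < deadline);

	return false;			/* slow path: caller should sleep/wait instead */
}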
1963 * i915_request_wait_timeout - wait until execution of request has finished
1964 * @rq: the request to wait upon
1968 * i915_request_wait_timeout() waits for the request to be completed, for a
1972 * Returns the remaining time (in jiffies) if the request completed, which may
1973 * be zero if the request is unfinished after the timeout expires.
1977 * pending before the request completes.
2018 * short wait, we first spin to see if the request would have completed
2028 * completion. That requires having a good predictor for the request
2055 * Flush the submission tasklet, but only if it may help this request.
2060 * is a chance it may submit this request. If the request is not ready
2103 * i915_request_wait - wait until execution of request has finished
2104 * @rq: the request to wait upon
2108 * i915_request_wait() waits for the request to be completed, for a
2112 * Returns the remaining time (in jiffies) if the request completed, which may
2113 * be zero or -ETIME if the request is unfinished after the timeout expires.
2115 * pending before the request completes.
2199 * - the request is not ready for execution as it is waiting
2203 * - all fences the request was waiting on have been signaled,
2204 * and the request is now ready for execution and will be
2207 * - a ready request may still need to wait on semaphores
2214 * - the request has been transferred from the backend queue and
2217 * - a completed request may still be regarded as executing, its
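The states listed above can be summarised, purely for illustration, as a small lifecycle enum (hypothetical, not how the driver represents them):

enum fake_request_state {
	FAKE_RQ_WAITING,	/* blocked on fences/dependencies, not ready to run */
	FAKE_RQ_READY,		/* dependencies signaled, queued for execution */
	FAKE_RQ_INFLIGHT,	/* transferred to the backend / executing on HW */
	FAKE_RQ_COMPLETED,	/* breadcrumb written; may still be regarded as executing until retired */
};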