Re: [Intel-gfx] [PATCH 08/13] drm/i915: Eliminate lots of iterations over the execobjects array

2017-04-10 Thread Chris Wilson
On Tue, Apr 04, 2017 at 05:57:34PM +0300, Joonas Lahtinen wrote:
> On ke, 2017-03-29 at 16:56 +0100, Chris Wilson wrote:
> > The major scaling bottleneck in execbuffer is the processing of the
> > execobjects. Creating an auxiliary list is inefficient when compared to
> > using the execobject array we already have allocated.
> > 
> > Reservation is then split into phases. As we look up the VMA, we
> > try to bind it back into its active location. Only if that fails, do we add
> > it to the unbound list for phase 2. In phase 2, we try and add all those
> > objects that could not fit into their previous location, with fallback
> > to retrying all objects and evicting the VM in case of severe
> > fragmentation. (This is the same as before, except that phase 1 is now
> > done inline with looking up the VMA to avoid an iteration over the
> > execobject array. In the ideal case, we eliminate the separate reservation
> > phase). During the reservation phase, we only evict from the VM between
> > passes (rather than, as currently, whenever we try to fit each new VMA). In
> > testing with Unreal Engine's Atlantis demo, which stresses the eviction
> > logic on gen7 class hardware, this speeds up the framerate by a factor of
> > 2.
> > 
> > The second loop amalgamation is between move_to_gpu and move_to_active.
> > As we always submit the request, even if incomplete, we can use the
> > current request to track active VMA as we perform the flushes and
> > synchronisation required.
> > 
> > The next big advancement is to avoid copying back to the user any
> > execobjects and relocations that are not changed.
> > 
> > v2: Add a Theory of Operation spiel.
> > v3: Fall back to slow relocations in preparation for flushing userptrs.
> > 
> > Signed-off-by: Chris Wilson 
> 
> 

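(For orientation, the two-phase reservation the quoted commit message describes, sketched in simplified pseudo-C. eb_lookup_vmas()/eb_pin_vma()/eb_reserve() follow the patch's naming; the control flow here is paraphrased, not the patch text.)

	/* Phase 1, folded into the lookup: try to pin each VMA back into the
	 * location it already occupies; anything that misses goes onto
	 * eb->unbound instead of a separate auxiliary list. */
	err = eb_lookup_vmas(eb);	/* calls eb_pin_vma() per execobject */
	if (err)
		return err;

	/* Phase 2, only if something missed: bind the unbound list, and on
	 * severe fragmentation unreserve everything, evict the VM and retry
	 * (the commit message's fallback). */
	if (!list_empty(&eb->unbound)) {
		err = eb_reserve(eb);
		if (err)
			return err;
	}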
> > +   if ((entry->flags & EXEC_OBJECT_SUPPORTS_48B_ADDRESS) == 0 &&
> > +   (vma->node.start + vma->node.size - 1) >> 32)
> 
> upper_32_bits for clarity?

Not sure. I'd rather keep it as is for the time being and think of a
macro for this and the one in i915_gem_gtt.c
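For the record, such a macro might look like this (name and form are hypothetical, not from the patch):

	/* true when any byte of the node lies above the 4GiB boundary and the
	 * placement therefore requires EXEC_OBJECT_SUPPORTS_48B_ADDRESS */
	static inline bool i915_vma_is_above_4G(const struct i915_vma *vma)
	{
		return upper_32_bits(vma->node.start + vma->node.size - 1);
	}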

> > +static void
> > +eb_pin_vma(struct i915_execbuffer *eb,
> > +      struct drm_i915_gem_exec_object2 *entry,
> > +      struct i915_vma *vma)
> > +{
> > +   u64 flags;
> > +
> > +   flags = vma->node.start;
> 
> I'd be more comfortable if some mask was applied here.
> 
> Or at least GEM_BUG_ON(flags & BAD_FLAGS);

BUILD_BUG_ON() already guards against the bits mixing.
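For illustration, the two guards under discussion could look roughly like this (the exact set of PIN_* flags packed into the low bits is an assumption on my part):

	flags = vma->node.start;
	/* runtime variant of Joonas' suggestion: node.start is page aligned,
	 * so none of the pin-flag bits may already be set here */
	GEM_BUG_ON(!IS_ALIGNED(flags, PAGE_SIZE));

	/* compile-time guard: every pin flag must live below bit 12 so it
	 * cannot collide with a page-aligned offset */
	BUILD_BUG_ON((PIN_GLOBAL | PIN_USER | PIN_NONBLOCK | PIN_OFFSET_FIXED) &
		     GENMASK_ULL(63, PAGE_SHIFT));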

> > +   if (entry->flags & EXEC_OBJECT_NEEDS_FENCE) {
> > +   ret = i915_vma_get_fence(vma);
> > +   if (ret)
> > +   return ret;
> > +
> > +   if (i915_vma_pin_fence(vma))
> > +   entry->flags |= __EXEC_OBJECT_HAS_FENCE;
> > +   }
> 
> Smells like duplicate code with eb_vma_pin.

Close, but the order is intentionally different. :|
Earlier we don't take the error immediately and only fail if the result
doesn't match our requirements. This time, now that we are forced to
bind the vma, we do want to double-check each step and unwind.
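Roughly, the difference is this (a sketch, not the patch text; eb_pin_vma() returns void per the hunk quoted earlier, while the error-returning block above belongs to the reservation step):

	/* phase 1, eb_pin_vma(): opportunistic - a fence we cannot take just
	 * leaves the vma unreserved for phase 2 to deal with */
	if (entry->flags & EXEC_OBJECT_NEEDS_FENCE) {
		if (i915_vma_get_fence(vma)) {
			__i915_vma_unpin(vma);
			return;
		}
		if (i915_vma_pin_fence(vma))
			entry->flags |= __EXEC_OBJECT_HAS_FENCE;
	}

	/* phase 2, the reservation step quoted above: the bind is mandatory,
	 * so every failure is reported and unwound by the caller */
	if (entry->flags & EXEC_OBJECT_NEEDS_FENCE) {
		ret = i915_vma_get_fence(vma);
		if (ret)
			return ret;
		if (i915_vma_pin_fence(vma))
			entry->flags |= __EXEC_OBJECT_HAS_FENCE;
	}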

> > +   return __get_user(c, end - 1);
> 
> What's the point in this final check?

There's no guarantee that the loop triggered a read on each page, so we
have to do a second read on the last byte of the address range to be
sure.
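Illustrated as a minimal sketch (hypothetical helper, not the patch's code; only the final __get_user() is quoted from the patch):

	static int prefault_range(const void __user *ptr, unsigned long len)
	{
		const char __user *addr = ptr, *end = addr + len;
		char c;
		int err;

		/* Stride through the range a page at a time. If ptr is not
		 * page aligned, this can leave the page containing the last
		 * byte untouched... */
		for (; addr < end; addr += PAGE_SIZE) {
			err = __get_user(c, addr);
			if (err)
				return err;
		}

		/* ...so touch the very last byte to guarantee the final page
		 * has also been faulted in. */
		return __get_user(c, end - 1);
	}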

> > +static void eb_export_fence(struct drm_i915_gem_object *obj,
> > +   struct drm_i915_gem_request *req,
> > +   unsigned int flags)
> > +{
> > +   struct reservation_object *resv = obj->resv;
> > +
> > +   /* Ignore errors from failing to allocate the new fence, we can't
> > +    * handle an error right now. Worst case should be missed
> > +    * synchronisation leading to rendering corruption.
> > +    */
> 
> Worthy DRM_DEBUG?

I think the oomkiller emanating from this spot will be instructive
enough. At some point in the future, when we start using ww_mutex for
serializing the objects between execbuf (rather than struct_mutex), we
should be able to do the reservation early and so catch an error before
we commit.
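For context, a sketch of how the export then completes with the reservation_object API of that era (the allocation whose failure the comment tolerates is the shared-slot reservation; the exact locking in the patch may differ):

	ww_mutex_lock(&resv->lock, NULL);
	if (flags & EXEC_OBJECT_WRITE)
		reservation_object_add_excl_fence(resv, &req->fence);
	else if (reservation_object_reserve_shared(resv) == 0)
		/* slot allocation may fail; we carry on regardless */
		reservation_object_add_shared_fence(resv, &req->fence);
	ww_mutex_unlock(&resv->lock);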

> > @@ -1155,10 +1524,33 @@ eb_move_to_gpu(struct i915_execbuffer *eb)
> >     }
> >  
> >     ret = i915_gem_request_await_object
> > -   (eb->request, obj, vma->exec_entry->flags & EXEC_OBJECT_WRITE);
> > +   (eb->request, obj, entry->flags & EXEC_OBJECT_WRITE);
> >     if (ret)
> >     return ret;
> > +
> > +skip_flushes:
> > +   obj->base.write_domain = 0;
> > +   if (entry->flags & EXEC_OBJECT_WRITE) {
> > +   obj->base.read_domains = 0;
> > +   if (!obj->cache_dirty && gpu_write_needs_clflush(obj))
> > +   obj->cache_dirty = true;
> > +   intel_fb_obj_invalidate(obj, ORIGIN_CS);
> > +   }
> > +   obj->base.read_domains |= I915_GEM_GPU_DOMAINS;
> > +
> > +   i915_vma_move_to_active(vma, eb->request, entry->flags);
> > +   

Re: [Intel-gfx] [PATCH 08/13] drm/i915: Eliminate lots of iterations over the execobjects array

2017-04-04 Thread Joonas Lahtinen
On ke, 2017-03-29 at 16:56 +0100, Chris Wilson wrote:
> The major scaling bottleneck in execbuffer is the processing of the
> execobjects. Creating an auxiliary list is inefficient when compared to
> using the execobject array we already have allocated.
> 
> Reservation is then split into phases. As we look up the VMA, we
> try to bind it back into its active location. Only if that fails, do we add
> it to the unbound list for phase 2. In phase 2, we try and add all those
> objects that could not fit into their previous location, with fallback
> to retrying all objects and evicting the VM in case of severe
> fragmentation. (This is the same as before, except that phase 1 is now
> done inline with looking up the VMA to avoid an iteration over the
> execobject array. In the ideal case, we eliminate the separate reservation
> phase). During the reservation phase, we only evict from the VM between
> passes (rather than, as currently, whenever we try to fit each new VMA). In
> testing with Unreal Engine's Atlantis demo, which stresses the eviction
> logic on gen7 class hardware, this speeds up the framerate by a factor of
> 2.
> 
> The second loop amalgamation is between move_to_gpu and move_to_active.
> As we always submit the request, even if incomplete, we can use the
> current request to track active VMA as we perform the flushes and
> synchronisation required.
> 
> The next big advancement is to avoid copying back to the user any
> execobjects and relocations that are not changed.
> 
> v2: Add a Theory of Operation spiel.
> v3: Fall back to slow relocations in preparation for flushing userptrs.
> 
> Signed-off-by: Chris Wilson 



>  struct i915_execbuffer {
>   struct drm_i915_private *i915;
>   struct drm_file *file;
> @@ -63,19 +180,24 @@ struct i915_execbuffer {
>   struct i915_address_space *vm;
>   struct i915_vma *batch;
>   struct drm_i915_gem_request *request;
> - u32 batch_start_offset;
> - u32 batch_len;
> - unsigned int dispatch_flags;
> - struct drm_i915_gem_exec_object2 shadow_exec_entry;
> - bool need_relocs;
> - struct list_head vmas;
> + unsigned int buffer_count;
> + struct list_head unbound;
> + struct list_head relocs;
>   struct reloc_cache {
>   struct drm_mm_node node;
>   unsigned long vaddr;
>   unsigned int page;
>   bool use_64bit_reloc : 1;
> + bool has_llc : 1;
> + bool has_fence : 1;
> + bool needs_unfenced : 1;
>   } reloc_cache;
> - int lut_mask;
> + u64 invalid_flags;
> + u32 context_flags;
> + u32 dispatch_flags;
> + u32 batch_start_offset;
> + u32 batch_len;
> + int lut_size;
>   struct hlist_head *buckets;

Please document (new) members.
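For instance, along these lines (the descriptions are my reading of the patch, so treat them as assumptions):

	unsigned int buffer_count;	/* number of entries in the execobject array */
	struct list_head unbound;	/* vmas that missed their old location in
					 * phase 1 and still need reserving */
	struct list_head relocs;	/* vmas with relocation entries to process */
	u64 invalid_flags;		/* EXEC_OBJECT_* bits rejected during validation */
	u32 context_flags;		/* EXEC_OBJECT_* bits implied by the context */
	int lut_size;			/* see eb_create(): <0 means no hash table */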

> +static inline u64 gen8_noncanonical_addr(u64 address)
> +{
> + return address & ((1ULL << (GEN8_HIGH_ADDRESS_BIT + 1)) - 1);

GENMASK_ULL
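For reference, the GENMASK_ULL spelling would be (GEN8_HIGH_ADDRESS_BIT is 47 in the driver, so the mask is identical):

	static inline u64 gen8_noncanonical_addr(u64 address)
	{
		/* keep bits [47:0], i.e. strip the sign-extended canonical bits */
		return address & GENMASK_ULL(GEN8_HIGH_ADDRESS_BIT, 0);
	}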

> @@ -106,71 +256,302 @@ eb_create(struct i915_execbuffer *eb)
>   return -ENOMEM;
>   }
>  
> - eb->lut_mask = size;
> + eb->lut_size = size;
>   } else {
> - eb->lut_mask = -eb->args->buffer_count;
> + eb->lut_size = -eb->buffer_count;

Document the negative meaning in the doc mentioned above.
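Something like this, perhaps (the wording is a suggestion, not from the patch):

	/*
	 * lut_size > 0: number of hash buckets in ->buckets used for
	 *               handle->vma lookups;
	 * lut_size < 0: hashing disabled (small buffer count or failed
	 *               allocation); -lut_size is the execobject count and
	 *               lookups fall back to a linear scan of the array.
	 */
	int lut_size;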

> +static bool
> +eb_vma_misplaced(const struct drm_i915_gem_exec_object2 *entry,
> +  const struct i915_vma *vma)
> +{
> + if ((entry->flags & __EXEC_OBJECT_HAS_PIN) == 0)

Let's try to stick to one convention; we gave up == NULL too, so
!(a & FOO).
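i.e. the condition would be spelled (body unchanged, elided here):

	if (!(entry->flags & __EXEC_OBJECT_HAS_PIN))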



> + if ((entry->flags & EXEC_OBJECT_SUPPORTS_48B_ADDRESS) == 0 &&
> + (vma->node.start + vma->node.size - 1) >> 32)

upper_32_bits for clarity?

> +static void
> +eb_pin_vma(struct i915_execbuffer *eb,
> +    struct drm_i915_gem_exec_object2 *entry,
> +    struct i915_vma *vma)
> +{
> + u64 flags;
> +
> + flags = vma->node.start;

I'd be more comfortable if some mask was applied here.

Or at least GEM_BUG_ON(flags & BAD_FLAGS);

> +static inline void
> +eb_unreserve_vma(struct i915_vma *vma,
> +  struct drm_i915_gem_exec_object2 *entry)
>  {
> - struct drm_i915_gem_exec_object2 *entry = vma->exec_entry;
> -
> - __eb_unreserve_vma(vma, entry);
> - entry->flags &= ~(__EXEC_OBJECT_HAS_FENCE | __EXEC_OBJECT_HAS_PIN);
> + if (entry->flags & __EXEC_OBJECT_HAS_PIN) {

I'd treat the if more as an early return, but I guess you have more
code coming.
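With an early return the function would read roughly like this (a sketch based on the lines quoted above):

	static inline void
	eb_unreserve_vma(struct i915_vma *vma,
			 struct drm_i915_gem_exec_object2 *entry)
	{
		if (!(entry->flags & __EXEC_OBJECT_HAS_PIN))
			return;

		__eb_unreserve_vma(vma, entry);
		entry->flags &= ~(__EXEC_OBJECT_HAS_FENCE | __EXEC_OBJECT_HAS_PIN);
	}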

> +static int
> +eb_add_vma(struct i915_execbuffer *eb,
> +      struct drm_i915_gem_exec_object2 *entry,
> +      struct i915_vma *vma)
>  {
> -   struct i915_vma *vma;
> +   int ret;
>  
> -   list_for_each_entry(vma, &eb->vmas, exec_link) {
> -   eb_unreserve_vma(vma);
> -   i915_vma_put(vma);
> -   vma->exec_entry = NULL;
> +   GEM_BUG_ON(i915_vma_is_closed(vma));
> +
> + if ((eb->args->flags & __EXEC_VALIDATED) == 0) {

[Intel-gfx] [PATCH 08/13] drm/i915: Eliminate lots of iterations over the execobjects array

2017-03-29 Thread Chris Wilson
The major scaling bottleneck in execbuffer is the processing of the
execobjects. Creating an auxiliary list is inefficient when compared to
using the execobject array we already have allocated.

Reservation is then split into phases. As we look up the VMA, we
try to bind it back into its active location. Only if that fails, do we add
it to the unbound list for phase 2. In phase 2, we try and add all those
objects that could not fit into their previous location, with fallback
to retrying all objects and evicting the VM in case of severe
fragmentation. (This is the same as before, except that phase 1 is now
done inline with looking up the VMA to avoid an iteration over the
execobject array. In the ideal case, we eliminate the separate reservation
phase). During the reservation phase, we only evict from the VM between
passes (rather than, as currently, whenever we try to fit each new VMA). In
testing with Unreal Engine's Atlantis demo, which stresses the eviction
logic on gen7 class hardware, this speeds up the framerate by a factor of
2.

The second loop amalgamation is between move_to_gpu and move_to_active.
As we always submit the request, even if incomplete, we can use the
current request to track active VMA as we perform the flushes and
synchronisation required.

The next big advancement is to avoid copying back to the user any
execobjects and relocations that are not changed.

v2: Add a Theory of Operation spiel.
v3: Fall back to slow relocations in preparation for flushing userptrs.

Signed-off-by: Chris Wilson 
---
 drivers/gpu/drm/i915/i915_drv.h |2 +-
 drivers/gpu/drm/i915/i915_gem_evict.c   |   95 +-
 drivers/gpu/drm/i915/i915_gem_execbuffer.c  | 1713 +--
 drivers/gpu/drm/i915/i915_vma.c |2 +-
 drivers/gpu/drm/i915/i915_vma.h |1 +
 drivers/gpu/drm/i915/selftests/i915_gem_evict.c |4 +-
 drivers/gpu/drm/i915/selftests/i915_vma.c   |   16 +-
 7 files changed, 1043 insertions(+), 790 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index e3db344243f1..5e434fad2d78 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -3549,7 +3549,7 @@ int __must_check i915_gem_evict_something(struct i915_address_space *vm,
 int __must_check i915_gem_evict_for_node(struct i915_address_space *vm,
 struct drm_mm_node *node,
 unsigned int flags);
-int i915_gem_evict_vm(struct i915_address_space *vm, bool do_idle);
+int i915_gem_evict_vm(struct i915_address_space *vm);
 
 /* belongs in i915_gem_gtt.h */
 static inline void i915_gem_chipset_flush(struct drm_i915_private *dev_priv)
diff --git a/drivers/gpu/drm/i915/i915_gem_evict.c b/drivers/gpu/drm/i915/i915_gem_evict.c
index fd5c75517143..c45d58e8a53d 100644
--- a/drivers/gpu/drm/i915/i915_gem_evict.c
+++ b/drivers/gpu/drm/i915/i915_gem_evict.c
@@ -50,6 +50,30 @@ static bool ggtt_is_idle(struct drm_i915_private *dev_priv)
return true;
 }
 
+static int ggtt_flush(struct drm_i915_private *i915)
+{
+   int err;
+
+   /* Not everything in the GGTT is tracked via vma (otherwise we
+* could evict as required with minimal stalling) so we are forced
+* to idle the GPU and explicitly retire outstanding requests in
+* the hopes that we can then remove contexts and the like only
+* bound by their active reference.
+*/
+   err = i915_gem_switch_to_kernel_context(i915);
+   if (err)
+   return err;
+
+   err = i915_gem_wait_for_idle(i915,
+I915_WAIT_INTERRUPTIBLE |
+I915_WAIT_LOCKED);
+   if (err)
+   return err;
+
+   i915_gem_retire_requests(i915);
+   return 0;
+}
+
 static bool
 mark_free(struct drm_mm_scan *scan,
  struct i915_vma *vma,
@@ -175,23 +199,10 @@ i915_gem_evict_something(struct i915_address_space *vm,
return intel_has_pending_fb_unpin(dev_priv) ? -EAGAIN : -ENOSPC;
}
 
-   /* Not everything in the GGTT is tracked via vma (otherwise we
-* could evict as required with minimal stalling) so we are forced
-* to idle the GPU and explicitly retire outstanding requests in
-* the hopes that we can then remove contexts and the like only
-* bound by their active reference.
-*/
-   ret = i915_gem_switch_to_kernel_context(dev_priv);
-   if (ret)
-   return ret;
-
-   ret = i915_gem_wait_for_idle(dev_priv,
-I915_WAIT_INTERRUPTIBLE |
-I915_WAIT_LOCKED);
+   ret = ggtt_flush(dev_priv);
if (ret)
return ret;
 
-   i915_gem_retire_requests(dev_priv);
goto search_again;
 
 found:
@@ -338,10 +349,8 @@ int i915_gem_evict_for_node(struct