All of that sounds easily done by changing the
reservation_object_wait_timeout_rcu() arguments: wait_all to false, intr to
true, and timeout to MAX_SCHEDULE_TIMEOUT.
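
A minimal sketch of that change, assuming the pending_flip_obj context from
the posted patch and a caller that can propagate errors (the error handling
below is illustrative, not part of the patch):

	/* Wait only on the exclusive (write) fence, interruptibly, and rely
	 * on fences eventually signalling instead of a finite timeout.
	 */
	if (pending_flip_obj->base.dma_buf) {
		long ret;

		ret = reservation_object_wait_timeout_rcu(
				pending_flip_obj->base.dma_buf->resv,
				false,	/* wait_all: exclusive fence only */
				true,	/* intr: interruptible wait */
				MAX_SCHEDULE_TIMEOUT);
		if (ret < 0)
			return ret;	/* e.g. -ERESTARTSYS if interrupted */
	}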

Thanks,
Alex

-----Original Message-----
From: Daniel Vetter [mailto:daniel.vet...@ffwll.ch] On Behalf Of Daniel Vetter
Sent: Monday, November 16, 2015 7:16 AM
To: Alexander Goins
Cc: Chris Wilson; dri-devel at lists.freedesktop.org
Subject: Re: [PATCH i915 v2 1/2] i915: wait for fences in mmio_flip()

On Mon, Nov 09, 2015 at 09:42:05PM +0000, Alexander Goins wrote:
> >> +  /* For framebuffer backed by dmabuf, wait for fence */
> >> +  mutex_lock(&dev->object_name_lock);
> 
> >The lock here is unfortunate. I thought once a dmabuf is attached to an 
> >object, it persists until the object is destroyed, so afaict the lock here 
> >is unnecessary (as it only protects against a userspace race in attaching a 
> >dmabuf).
> 
> You're probably right. I'll send out a v3 patch set with the lock removed if 
> there are no other comments.
> 
> >> +  if (pending_flip_obj->base.dma_buf) {
> >> +          reservation_object_wait_timeout_rcu(
> 
> >Side-question, are these fences exclusive or do we track read/write?
> 
> They are exclusive, with the sink X driver using vblank events to 
> explicitly request the source X driver to write. If you guys want to 
> track read/write I could switch over to shared fences, but for my use 
> case only exclusive fences are necessary.

reservations track reads/writes and here we should only wait for the exclusive
fence (i.e. pending writes) and not also for reads. Waiting for reads as well
would result in piles of unnecessary stalls all over. We need a different
wait_timeout call here.

Also fences should always eventually signal (worst case with a hangcheck 
fallback that force-signals everything that died in a gpu hang), so I don't 
think a timeout is the correct approach.

What we need, on the other hand, is for this wait to be interruptible (at
least for the new atomic path).
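
As a hedged illustration of those three requirements (the helper name below is
hypothetical, not from the patch), the wait could be wrapped along these lines:

/* Hypothetical helper: wait interruptibly, and without an artificial
 * timeout, for the exclusive fence of a dma-buf. Returns 0 once the
 * fence has signalled, -ERESTARTSYS if interrupted by a signal, or
 * another negative errno on failure.
 */
static int wait_for_dmabuf_exclusive_fence(struct dma_buf *dma_buf)
{
	long lret;

	lret = reservation_object_wait_timeout_rcu(dma_buf->resv,
						   false, true,
						   MAX_SCHEDULE_TIMEOUT);
	if (lret < 0)
		return lret;

	return 0;
}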
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
