Kristian Høgsberg skrev:
> ---
>
> Okay, here's the patch again with the locking fixed.  Everything else is
> the same.  As for the potential problem with polling on the drm fd with
> both dri1 and dri2 enabled, I suggest that we just disable the dri1
> sigio handler in the xserver.  We'll do this in a commit that precedes
> the merge of the page flipping code, so that no server will be able to
> poll on the drm fd in both dri1 and dri2 code.
>
> cheers,
> Kristian

<Grr>
1) The locking problems are still not fixed. See below. I just did a
quick look, so the comment list is not complete.

2) The 64/32-bit EFAULT problems mentioned are ignored. See below.
</Grr>

Even if that is fixed, I believe this patch adds a framework that doesn't
belong in a modern-day graphical subsystem, for the following reasons,
many of them already mentioned, but let me reiterate. (The dri1
compatibility issue, at least, will be fixed with your suggestion.)

a) It puts requirements on user-space for correct functionality of DRM
ioctls. I'm talking about the master needing to parse the event queue.

b) It requires the master to act as a scheduler, and circumvents the DRM
command submission mechanism through the delayed unpin callback. If this
is to work around any inability of GEM to serve a command submission
while a previous command submission is blocked in the kernel, then IMHO
that should be fixed and not worked around.

c) The implementation will mostly be worked around with capable
schedulers and hardware.

A couple of questions:

Why are you guys so reluctant to use kernel scheduling for this instead
of a mix of kernel / user-space scheduling?

If the plan is to eliminate DRI2GetBuffers() once per frame, what will
then be used to block clients rendering to the old back buffer?
>  drivers/gpu/drm/drm_crtc.c              |  171 ++++++++++++++++++++++++++++++-
>  drivers/gpu/drm/drm_crtc_helper.c       |   10 ++
>  drivers/gpu/drm/drm_drv.c               |    1 +
>  drivers/gpu/drm/drm_fops.c              |   68 ++++++++++++-
>  drivers/gpu/drm/drm_irq.c               |   42 ++++++++
>  drivers/gpu/drm/i915/i915_drv.c         |    1 +
>  drivers/gpu/drm/i915/intel_display.c    |   16 +++-
>  drivers/gpu/drm/radeon/radeon_display.c |    3 +-
>  include/drm/drm.h                       |   25 +++++
>  include/drm/drmP.h                      |   32 ++++++
>  include/drm/drm_crtc.h                  |   27 +++++
>  include/drm/drm_crtc_helper.h           |    4 +
>  include/drm/drm_mode.h                  |   16 +++
>  13 files changed, 409 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_crtc.c b/drivers/gpu/drm/drm_crtc.c
> index 8fab789..d877c21 100644
> --- a/drivers/gpu/drm/drm_crtc.c
> +++ b/drivers/gpu/drm/drm_crtc.c
> @@ -34,6 +34,8 @@
>  #include "drmP.h"
>  #include "drm_crtc.h"
>
> +#undef set_base
> +
>  struct drm_prop_enum_list {
>  	int type;
>  	char *name;
> @@ -342,6 +344,35 @@ void drm_framebuffer_cleanup(struct drm_framebuffer *fb)
>  EXPORT_SYMBOL(drm_framebuffer_cleanup);
>
>  /**
> + * drm_crtc_async_flip - do a set_base call from a work queue
> + * @work: work struct
> + *
> + * Called when a set_base call is queued by the page flip code. This
> + * allows the flip ioctl itself to return immediately and allow userspace
> + * to continue working.
> + */
> +static void drm_crtc_async_flip(struct work_struct *work)
> +{
> +	struct drm_crtc *crtc = container_of(work, struct drm_crtc, async_flip);
> +	struct drm_device *dev = crtc->dev;
> +	struct drm_pending_flip *pending;
> +
> +	mutex_lock(&dev->mode_config.mutex);
> +
> +	BUG_ON(crtc->pending_flip == NULL);
> +
> +	crtc->funcs->set_base(crtc, crtc->x, crtc->y, NULL);
> +
> +	pending = crtc->pending_flip;
> +	crtc->pending_flip = NULL;
> +
> +	pending->frame = drm_vblank_count(dev, crtc->pipe);

What's protecting dev->flip_list here?
> +	list_add_tail(&pending->link, &dev->flip_list);
> +
> +	mutex_unlock(&dev->mode_config.mutex);
> +}
> +
>
>  #define DRM_CORE_IOCTL_COUNT	ARRAY_SIZE( drm_ioctls )
> diff --git a/drivers/gpu/drm/drm_fops.c b/drivers/gpu/drm/drm_fops.c
> index 251bc0e..dcd9c66 100644
> --- a/drivers/gpu/drm/drm_fops.c
> +++ b/drivers/gpu/drm/drm_fops.c
> @@ -257,6 +257,8 @@ static int drm_open_helper(struct inode *inode, struct file *filp,
>
>  	INIT_LIST_HEAD(&priv->lhead);
>  	INIT_LIST_HEAD(&priv->fbs);
> +	INIT_LIST_HEAD(&priv->event_list);
> +	init_waitqueue_head(&priv->event_wait);
>
>  	if (dev->driver->driver_features & DRIVER_GEM)
>  		drm_gem_open(dev, priv);
> @@ -429,6 +431,9 @@ int drm_release(struct inode *inode, struct file *filp)
>  {
>  	struct drm_file *file_priv = filp->private_data;
>  	struct drm_device *dev = file_priv->minor->dev;
> +	struct drm_pending_flip *f, *ft;
> +	struct drm_pending_event *e, *et;
> +
>  	int retcode = 0;
>
>  	lock_kernel();
> @@ -451,6 +456,19 @@ int drm_release(struct inode *inode, struct file *filp)
>  	if (file_priv->minor->master)
>  		drm_master_release(dev, filp);

And here? What's the locking order between the mode_config mutex and the
struct_mutex?

> +	mutex_lock(&dev->struct_mutex);
> +
> +	/* Remove pending flips */
> +	list_for_each_entry_safe(f, ft, &dev->flip_list, link)
> +		if (f->pending_event.file_priv == file_priv)
> +			drm_finish_pending_flip(dev, f, 0);
> +
> +	/* Remove unconsumed events */
> +	list_for_each_entry_safe(e, et, &file_priv->event_list, link)
> +		e->destroy(e);
> +
> +	mutex_unlock(&dev->struct_mutex);
> +
>  	if (dev->driver->driver_features & DRIVER_GEM)
>  		drm_gem_release(dev, file_priv);
>
> @@ -544,9 +562,55 @@ int drm_release(struct inode *inode, struct file *filp)
>  }
>  EXPORT_SYMBOL(drm_release);
>
> -/** No-op.
>  */
> +ssize_t drm_read(struct file *filp, char __user *buffer,
> +		 size_t count, loff_t *offset)
> +{
> +	struct drm_file *file_priv = filp->private_data;
> +	struct drm_device *dev = file_priv->minor->dev;
> +	struct drm_pending_event *event;
> +	ssize_t total, ret;
> +
> +	ret = wait_event_interruptible(file_priv->event_wait,
> +				       !list_empty(&file_priv->event_list));
> +	if (ret < 0)
> +		return ret;
> +
> +	total = 0;

What's protecting file_priv->event_list here?

> +	while (!list_empty(&file_priv->event_list)) {
> +		mutex_lock(&dev->struct_mutex);
> +		event = list_first_entry(&file_priv->event_list,
> +					 struct drm_pending_event, link);
> +		if (total + event->event->length > count) {
> +			mutex_unlock(&dev->struct_mutex);
> +			break;
> +		}
> +		list_del(&event->link);
> +		mutex_unlock(&dev->struct_mutex);
> +
> +		if (copy_to_user(buffer + total,
> +				 event->event, event->event->length)) {
> +			total = -EFAULT;
> +			break;
> +		}
> +
> +		total += event->event->length;
> +		event->destroy(event);
> +	}
> +
> +	return total;
> +}
> +EXPORT_SYMBOL(drm_read);
> +
>  unsigned int drm_poll(struct file *filp, struct poll_table_struct *wait)
>  {
> -	return 0;
> +	struct drm_file *file_priv = filp->private_data;
> +	unsigned int mask = 0;
> +
> +	poll_wait(filp, &file_priv->event_wait, wait);
> +

And here?
> +	if (!list_empty(&file_priv->event_list))
> +		mask |= POLLIN | POLLRDNORM;
> +
> +	return mask;
>  }
>  EXPORT_SYMBOL(drm_poll);
> diff --git a/drivers/gpu/drm/drm_irq.c b/drivers/gpu/drm/drm_irq.c
> index b4a3dbc..a0c120c 100644
> --- a/drivers/gpu/drm/drm_irq.c
> +++ b/drivers/gpu/drm/drm_irq.c
> @@ -34,6 +34,7 @@
>   */
>
>  #include "drmP.h"
> +#include "drm_crtc_helper.h"
>
>  #include <linux/interrupt.h>	/* For task queue support */
>
> @@ -71,6 +72,44 @@ int drm_irq_by_busid(struct drm_device *dev, void *data,
>  	return 0;
>  }
>
> +#define vblank_passed(a,b) ((long)(a - b) > 0)
> +
> +void drm_finish_pending_flip(struct drm_device *dev,
> +			     struct drm_pending_flip *f, u32 frame)
> +{
> +	struct timeval now;
> +
> +	f->event.frame = frame;
> +	do_gettimeofday(&now);
> +	f->event.tv_sec = now.tv_sec;
> +	f->event.tv_usec = now.tv_usec;
> +	drm_vblank_put(dev, f->pipe);
> +	list_del_init(&f->link);

What's protecting the file_priv->event_list here?

> +	list_add_tail(&f->pending_event.link,
> +		      &f->pending_event.file_priv->event_list);
> +	if (f->old_fb)
> +		f->old_fb->funcs->unpin(f->old_fb);
> +	wake_up_interruptible(&f->pending_event.file_priv->event_wait);
> +}
> +
> +/**
> + * Header for events written back to userspace on the drm fd. The
> + * type defines the type of event, the length specifies the total
> + * length of the event (including the header), and user_data is
> + * typically a 64 bit value passed with the ioctl that triggered the
> + * event. A read on the drm fd will always only return complete
> + * events, that is, if for example the read buffer is 100 bytes, and
> + * there are two 64 byte events pending, only one will be returned.
> + */
> +struct drm_event {
> +	__u32 type;
> +	__u32 length;
> +};
> +
> +#define DRM_EVENT_MODE_PAGE_FLIP 0x01

What is user-space required to do if kernel- and user-space disagree
about the size of the event, as may be the case here:

> +
> +struct drm_event_page_flip {
> +	struct drm_event base;
> +	__u64 user_data;
> +	__u32 tv_sec;
> +	__u32 tv_usec;
> +	__u32 frame;
> +};
> +

The size of this struct differs in 64-bit and 32-bit mode. May cause
EFAULTs.

> +struct drm_mode_page_flip {
> +	/** Handle of new front buffer */
> +	__u32 fb_id;
> +	__u32 crtc_id;
> +
> +	/* 64 bit cookie returned to userspace in the page flip event. */
> +	__u64 user_data;
> +	/**
> +	 * page flip flags (wait on flip only for now)
> +	 */
> +	__u32 flags;
> +};
> +
>  #endif

cheers,
Thomas.

--
_______________________________________________
Dri-devel mailing list
Dri-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/dri-devel