On Tue, Oct 27, 2009 at 3:11 PM, Zack Rusin <za...@vmware.com> wrote:
> Younes,
>
> sorry for the slow response, we've been a bit busy lately.
> I can't say that I'm a huge fan of this patch. I've been thinking about it
> lately and essentially I'd like to reiterate what I said initially about the
> pipe_video_context. While I think it's a great idea, I'm not too excited about
> the way it looks right now. It boils down to three problems for me:
>
> 1) The interface itself. In particular:
>
> void (*clear_surface)(struct pipe_video_context *vpipe,
>                         unsigned x, unsigned y,
>                         unsigned width, unsigned height,
>                         unsigned value,
>                         struct pipe_surface *surface);
> is problematic for two reasons: a) because it's something that looks more like
> a pipe_context method in itself, b) because it's a combination of a
> surface_fill and clear interface which we moved away from in master

Thanks for the comments. I've changed this slightly in my tree: it's
now ::surface_fill() and ::surface_copy(), just as in pipe_context.
The way I see it, in a video API implementation the client won't have
a pipe_context, so pipe_video_context has to provide these methods
itself so that you can 1) initialize video surfaces (to black, for
cases where the video doesn't fill the entire frame) and 2) copy the
surfaces to the front buffer. Unlike in pipe_context, these methods
must be provided, because I don't expect surfaces that can't be
rendered to to exist in this context; the only surfaces that should
exist here are color buffers. If someone wants or needs to implement
them on top of their pipe_context they can, but if their video
hardware can handle it directly there's no need to initialize a
pipe_context at all.
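To be concrete, what I have in my tree looks roughly like the sketch
below, modeled on the pipe_context methods of the same name; take the
exact argument names with a grain of salt since they may still change:

   /* Sketch only -- mirrors the pipe_context methods of the same name. */
   void (*surface_fill)(struct pipe_video_context *vpipe,
                        struct pipe_surface *dst,
                        unsigned dstx, unsigned dsty,
                        unsigned width, unsigned height,
                        unsigned value);

   void (*surface_copy)(struct pipe_video_context *vpipe,
                        struct pipe_surface *dst,
                        unsigned dstx, unsigned dsty,
                        struct pipe_surface *src,
                        unsigned srcx, unsigned srcy,
                        unsigned width, unsigned height);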

> void (*render_picture)(struct pipe_video_context     *vpipe,
>                          /*struct pipe_surface         *backround,
>                          struct pipe_video_rect        *backround_area,*/
>                          struct pipe_video_surface     *src_surface,
>                          enum pipe_mpeg12_picture_type picture_type,
>                          /*unsigned                    num_past_surfaces,
>                          struct pipe_video_surface     *past_surfaces,
>                          unsigned                      num_future_surfaces,
>                          struct pipe_video_surface     *future_surfaces,*/
>                          struct pipe_video_rect        *src_area,
>                          struct pipe_surface           *dst_surface,
>                          struct pipe_video_rect        *dst_area,
>                          /*unsigned                      num_layers,
>                          struct pipe_texture           *layers,
>                          struct pipe_video_rect        *layer_src_areas,
>                          struct pipe_video_rect        *layer_dst_areas,*/
>                          struct pipe_fence_handle      **fence);
>
> has simply way too many arguments, not to mention that over a half is
> commented out. It's really a common problem for the entire interface, methods
> are very complex.

Well, because they're commented out we haven't really committed to
anything yet; I just left them in as a reminder of what needs to be
supported for VDPAU (with the comments removed, this function maps
one-to-one onto the VDPAU function that does the same thing). I can
implement those parameters as settable state when the time comes if
that's what people prefer, along the lines of the sketch below.
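Something like this is what I have in mind; none of these setters
exist yet, the names and exact types are just illustrative:

   /* Hypothetical state setters splitting out the commented-out
    * arguments of render_picture(); sketch only. */
   void (*set_picture_background)(struct pipe_video_context *vpipe,
                                  struct pipe_surface *background,
                                  struct pipe_video_rect *background_area);

   void (*set_reference_frames)(struct pipe_video_context *vpipe,
                                unsigned num_past_surfaces,
                                struct pipe_video_surface *past_surfaces,
                                unsigned num_future_surfaces,
                                struct pipe_video_surface *future_surfaces);

   void (*set_picture_layers)(struct pipe_video_context *vpipe,
                              unsigned num_layers,
                              struct pipe_texture *layers,
                              struct pipe_video_rect *layer_src_areas,
                              struct pipe_video_rect *layer_dst_areas);

With those in place, render_picture() would only need the source
surface, picture type, source/destination areas, destination surface
and the fence.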

> Gallium deals with it with settable state. Which brings us
> to another problem with the interface:
>
> struct pipe_video_context
> {
>   struct pipe_screen *screen;
>   enum pipe_video_profile profile;
>   enum pipe_video_chroma_format chroma_format;
>   unsigned width;
>   unsigned height;
>        ... methods follow...
>
> which means that the interface is both an interface and a state.
> All of it is very un-gallium like.

However, this isn't mutable state; these fields are effectively
constants. Once you create a pipe_video_context with a given set of
parameters you get a context that can only handle exactly what you
specified, and everything else fails, so the fields are there to make
it easy to determine what a given video context handles. They could
be put behind getter functions if that's what people prefer (see the
sketch below), but I'm the only one who uses this stuff, so I
apologize if my personal habits sometimes sneak in...
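For instance, something along these lines would work; these getters
don't exist today, the names are just illustrative:

   /* Hypothetical getters that could replace the public fields. */
   enum pipe_video_profile
   (*get_profile)(struct pipe_video_context *vpipe);

   enum pipe_video_chroma_format
   (*get_chroma_format)(struct pipe_video_context *vpipe);

   void (*get_size)(struct pipe_video_context *vpipe,
                    unsigned *width, unsigned *height);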

> 2) We really need a real world video api implemented with the interface before
> making it public. So it should be proven that openmax or vdpau can actually be
> implemented using the interface before making it public.

Fair enough.

> 3) There's no hardware implementation for the interface and as far as i can
> see there's no plans for one. It's really what the interfaces are for, until
> we have people actually working on a hardware implementation there's no reason
> for this to be a Gallium interface at all.
>
> That's why i think the whole code should be an auxiliary module in which case
> patches like this one wouldn't be even necessary.

Well, I think the interface has to come before the implementation.
Another Nouveau contributor has some NV40 HW decoder code that has
been sitting around for a while.

Anyway, a state tracker is how this was implemented before
pipe_video_context existed; VDPAU and the prospect of HW
implementations are what prompted me to find another way to do it,
because writing any more code as an auxiliary module would just add
to the pile of code that would later have to be ported. At the time
Keith even gave me some suggestions to that end, so you'll have to
excuse me for going in this direction; I didn't hear any objections
back then.

> When it comes to interfaces it's a lot harder to remove/significantly change an
> interface than to add a new one, so we should be extremely careful when adding
> interfaces and at least try to make sure that for all #1, #2 and #3 are
> fixable.

Fair enough. I'm all ears when it comes to suggestions, but aside
from Cooper on r300g there aren't many others who touch or are
affected by this, so I don't get much feedback except when I have to
break things.
