On Sun, 2010-02-28 at 21:35 -0800, Corbin Simpson wrote:
> On Sun, Feb 28, 2010 at 9:15 PM, Dave Airlie <airl...@gmail.com> wrote:
> > On Mon, Mar 1, 2010 at 12:43 PM, Joakim Sindholt <b...@zhasha.com> wrote:
> >> On Sun, 2010-02-28 at 20:25 +0100, Jerome Glisse wrote:
> >>> Hi,
> >>>
> >>> I am a bit puzzled: how should a pipe driver handle
> >>> draw callback failure? On radeon (and I'm pretty sure nouveau
> >>> and intel hit the same issue) we only know whether we can
> >>> do the rendering at all once one of the draw_* context
> >>> callbacks is actually called.
> >>>
> >>> The failure here is dictated by memory constraints, i.e.
> >>> if the user binds a big texture, a big vbo ... we might not
> >>> have enough GPU address space to bind all the desired
> >>> objects (even for drawing a single triangle).
> >>>
> >>> What should we do? None of the draw callbacks can return
> >>> a value. Maybe for a GL state tracker we should report
> >>> GL_OUT_OF_MEMORY all the way up to the app? Anyway, the
> >>> bottom line is I think pipe drivers are missing something
> >>> here. Any ideas? Thoughts? Is there already a plan to
> >>> address that? :)
> >>>
> >>> Cheers,
> >>> Jerome
> >>
> >> I think a vital point you're missing is: do we even care? If rendering
> >> fails because we simply can't render any more, do we even want to fall
> >> back? I can see a point in having a cap on how large a buffer can be
> >> rendered but apart from that, I'm not sure there even is a problem.
> >>
> >
> > Welcome to GL. If I have a 32MB graphics card and I advertise
> > a maximum texture size of 4096x4096 + cubemapping + 3D textures,
> > there is no nice way for the app to get a clue about what it can
> > legally ask me to do. Old DRI drivers either used texmem, which
> > would try to scale the limits etc. to what could legally fit in
> > the memory available, or, with bufmgr drivers, they would check
> > against a limit from the kernel; in both cases they would fall
> > back to software rendering if necessary. Gallium seemingly can't
> > do this. Maybe it's okay to ignore it, but it wasn't an option
> > when we did the old DRI drivers.
> 
> GL_ATI_meminfo is unfortunately the best bet. :C
> 
> Also Gallium's API is written so that drivers must never fail on
> render calls. This is *incredibly* lame but there's nothing that can
> be done. Every single driver is currently encouraged to just drop shit
> on the floor if e.g. u_trim_pipe_prim fails, and every driver is
> encouraged to call u_trim_pipe_prim, so we have stupidity like: if
> (!u_trim_pipe_prim(mode, &count)) { return; }
> 
> In EVERY SINGLE DRIVER. Most uncool. What's the point of a unified API
> if it can't do sanity checks? >:T

I don't see what sanity checking has to do with the topic of failing
draw calls.

Would

 if (!u_trim_pipe_prim(mode, &count)) { return FALSE; }

make you any happier?
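To make that concrete, here is a minimal sketch of what a failure-reporting draw path could look like. The types and names (fake_context, draw_arrays_checked, max_verts) are hypothetical stand-ins, not real Gallium interfaces; the point is only that the callback returns a boolean the state tracker can translate into GL_OUT_OF_MEMORY instead of silently dropping the call.

```c
#include <stdbool.h>

/* Hypothetical, trimmed-down stand-in for a pipe context; the
 * real struct pipe_context is far richer than this. */
struct fake_context {
    unsigned max_verts;   /* pretend per-draw resource limit */
};

/* Sketch: a draw callback that reports failure to its caller
 * rather than dropping the draw on the floor. */
static bool
draw_arrays_checked(struct fake_context *ctx, unsigned count)
{
    if (count > ctx->max_verts)
        return false;     /* caller maps this to GL_OUT_OF_MEMORY */
    /* ... validate state, emit commands ... */
    return true;
}
```

The state tracker would then check the return value once, at the GL entry point, and raise the GL error there.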

I think we all agree sanity checking should be done by the state
trackers.  You're just confusing the result of the common practice of
cut'n'pasting code, and of working around third-party problems in their
code, with the encouraged design principles.  I'm sure a patch to fix
this would be most welcome.
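For illustration, the check that is currently duplicated per driver could be hoisted into the state tracker. Below is a simplified stand-in for u_trim_pipe_prim that handles only point/line/triangle lists (the real helper covers every primitive type); the enum and function names here are invented for the sketch.

```c
#include <stdbool.h>

/* Simplified stand-in for the primitive types; the real code
 * uses PIPE_PRIM_POINTS and friends. */
enum prim { PRIM_POINTS, PRIM_LINES, PRIM_TRIANGLES };

/* Trim the vertex count down to a whole number of primitives.
 * Returns false if nothing remains to draw, mirroring what
 * u_trim_pipe_prim does for the list primitives. */
static bool
trim_prim(enum prim mode, unsigned *count)
{
    unsigned per_prim = (mode == PRIM_POINTS) ? 1 :
                        (mode == PRIM_LINES)  ? 2 : 3;
    *count -= *count % per_prim;   /* drop the incomplete tail */
    return *count >= per_prim;     /* false: zero whole primitives */
}
```

If the state tracker calls this once before invoking the driver's draw hook, the per-driver `if (!u_trim_pipe_prim(...)) return;` copy/paste disappears.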

Jose


_______________________________________________
Mesa3d-dev mailing list
Mesa3d-dev@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mesa3d-dev