2009/3/31 Kristian Høgsberg <k...@bitplanet.net>:
> On Tue, Mar 31, 2009 at 4:46 AM, Alan Hourihane <al...@fairlite.co.uk> wrote:
>> On Tue, 2009-03-31 at 15:43 +1000, Dave Airlie wrote:
>>> So I've been playing a bit more with DRI2 and I'm having trouble
>>> finding how the buffer creation was meant to work for depth buffers.
>>>
>>> If my app uses a visual
>>> 0xbc 24 tc  0 24  0 r  .  .  8  8  8  0  0 16  0  0  0  0  0  0 0 None
>>>
>>> which is 24-bits + 16-bit depth, I don't have enough information in
>>> the DDX to create a depth buffer with a cpp of 2, the DDX
>>> can only see the drawable information it knows nothing about the visual.
>>>
>>> Now it goes and creates a set of 4 buffers to give back to Mesa, which
>>> then goes and takes the cpp of the depth buffer as 4
>>> when clearly it would need to be 2, then bad things happen.
>>>
>>> So should I just be creating the depth buffer in Mesa, does the DDX
>>> need to know about it all really...
>>
>> Yep, go create the depth buffer in Mesa. The DDX doesn't really need to
>> know about it.
>
> Creating the depth buffer and the other aux buffers through the X
> server is more complicated, but it was done that way for a reason.
> Two different client can render to the same GLX drawable and in that
> case they need to share the aux buffers for the drawable.  I
> considered letting the DRI driver create the buffers, but in that case
> it needs to tell the X server about it, and then you get extra
> roundtrips and races between DRI clients to create the buffers.  So
> creating them in the X server is the simplest solution, given what we
> have to support.
>
> As it is, the hw specific part of X creates the buffers from the
> tokens passed in by the DRI driver and can implement whichever
> convention the DRI driver expects.  For example, for intel, if both
> depth and stencil are requested, the DDX driver knows to only allocate
> one BO for the two buffers and the DRI driver expects this.  Likewise,
> if radeon expects a 16 bit depth buffer when there is no stencil,
> that's the behaviour the DDX should implement. If a situation
> arises where a combination of buffers required for a visual doesn't
> uniquely imply which buffer sizes are expected (say, a fbconfig
> without stencil could use either a 16 bit or a 32 bit depth buffer),
> we need to introduce new DRI2 buffer tokens along the lines of Depth16
> and Depth32 so the DRI driver can communicate which one it wants.
>

But you don't give the DDX enough information to make this decision.

I get the drawable, an attachments list and a count; I don't get the
visual or fbconfig.
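
For reference, this is all the DDX-side hook gets today (sketching the
DRI2InfoRec interface from memory, so the exact spelling may be off):

    DRI2BufferPtr (*CreateBuffers)(DrawablePtr pDraw,
                                   unsigned int *attachments,
                                   int count);

    /* attachments is just a list of tokens like DRI2BufferFrontLeft,
     * DRI2BufferBackLeft, DRI2BufferDepth, DRI2BufferStencil.  There is
     * no visual or fbconfig in sight, so for DRI2BufferDepth the DDX can
     * only guess a cpp from pDraw->depth. */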

Now radeon isn't special; Intel has the same bug. Most GPUs have three
cases they can deal with: z24s8, z24 and z16. However, if I pick a z16
visual there is no way the DDX can differentiate it from a z24 one; all
it gets is a drawable. It then uses CreatePixmap, which creates a pixmap
that is 2x too large, wasting VRAM, and has the wrong bpp.
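
In the DDX that boils down to roughly this (paraphrasing from memory,
not the exact code):

    /* all the DDX has is the drawable, so depth/bpp come from it: a 24
     * bit drawable gets a 32 bpp pixmap even when the visual asked for
     * a 16 bit depth buffer */
    pPixmap = (*pScreen->CreatePixmap)(pScreen, pDraw->width,
                                       pDraw->height, pDraw->depth, 0);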
What I've done for now is intercept it on the Mesa side and hack the bpp
down to 2, but this still wastes memory and ignores the actual problem:
the DDX doesn't have the info to make the correct decision.
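
The Mesa-side hack is basically this (a sketch; the helper name and the
context plumbing are made up, but __DRIbuffer and __DRI_BUFFER_DEPTH are
the real loader-side type and token):

    static void
    fixup_depth_cpp(GLcontext *ctx, __DRIbuffer *buffers, int count)
    {
        int i;

        for (i = 0; i < count; i++) {
            if (buffers[i].attachment == __DRI_BUFFER_DEPTH &&
                ctx->Visual.depthBits == 16)
                buffers[i].cpp = 2;  /* the server guessed 4 from the drawable */
        }
    }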

Dave.

> cheers,
> Kristian
>
