Re: DRI2 + buffer creation

2009-04-01 Thread Stefan Dösinger
On Tuesday, 31 March 2009 22:03:45, Ian Romanick wrote:
> Err... this is a band-aid on a bigger problem.  What about 24-bit depth
> with 16-bit color?  What about when we start to support multisampling?
> Etc.  If the DDX needs the fbconfig information, why not just give it
> the fbconfig?  Right?
Some cards even support floating-point depth buffers, AFAIK.



Re: DRI2 + buffer creation

2009-03-31 Thread Alan Hourihane
On Tue, 2009-03-31 at 15:43 +1000, Dave Airlie wrote:
> So I've been playing a bit more with DRI2, and I'm having trouble
> finding how buffer creation was meant to work for depth buffers.
>
> If my app uses this visual:
> 0xbc 24 tc  0 24  0 r  .  .  8  8  8  0  0 16  0  0  0  0  0  0 0 None
>
> which is 24-bit color + 16-bit depth, I don't have enough information in
> the DDX to create a depth buffer with a cpp of 2; the DDX can only see
> the drawable information and knows nothing about the visual.
>
> Now it goes and creates a set of 4 buffers to give back to Mesa, which
> then takes the cpp of the depth buffer as 4 when it clearly needs to be
> 2, and then bad things happen.
>
> So should I just be creating the depth buffer in Mesa? Does the DDX
> really need to know about it at all...

Yep, go create the depth buffer in Mesa. The DDX doesn't really need to
know about it.

Alan.




Re: DRI2 + buffer creation

2009-03-31 Thread Kristian Høgsberg
On Tue, Mar 31, 2009 at 4:46 AM, Alan Hourihane al...@fairlite.co.uk wrote:
> On Tue, 2009-03-31 at 15:43 +1000, Dave Airlie wrote:
> > So I've been playing a bit more with DRI2, and I'm having trouble
> > finding how buffer creation was meant to work for depth buffers.
> >
> > If my app uses this visual:
> > 0xbc 24 tc  0 24  0 r  .  .  8  8  8  0  0 16  0  0  0  0  0  0 0 None
> >
> > which is 24-bit color + 16-bit depth, I don't have enough information in
> > the DDX to create a depth buffer with a cpp of 2; the DDX can only see
> > the drawable information and knows nothing about the visual.
> >
> > Now it goes and creates a set of 4 buffers to give back to Mesa, which
> > then takes the cpp of the depth buffer as 4 when it clearly needs to be
> > 2, and then bad things happen.
> >
> > So should I just be creating the depth buffer in Mesa? Does the DDX
> > really need to know about it at all...
>
> Yep, go create the depth buffer in Mesa. The DDX doesn't really need to
> know about it.

Creating the depth buffer and the other aux buffers through the X
server is more complicated, but it was done that way for a reason.
Two different clients can render to the same GLX drawable, and in that
case they need to share the aux buffers for the drawable.  I
considered letting the DRI driver create the buffers, but in that case
it needs to tell the X server about it, and then you get extra
roundtrips and races between DRI clients to create the buffers.  So
creating them in the X server is the simplest solution, given what we
have to support.

As it is, the hw specific part of X creates the buffers from the
tokens passed in by the DRI driver and can implement whichever
convention the DRI driver expects.  For example, for intel, if both
depth and stencil are requested, the DDX driver knows to only allocate
one BO for the two buffers and the DRI driver expects this.  Likewise,
if radeon expects a 16 bit depth buffer when there is no stencil,
that's the behaviour the DDX should implement.  If a situation
arises where a combination of buffers required for a visual doesn't
uniquely imply which buffer sizes are expected (say, a fbconfig
without stencil could use either a 16 bit or a 32 bit depth buffer),
we need to introduce new DRI2 buffer tokens along the lines of Depth16
and Depth32 so the DRI driver can communicate which one it wants.
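
A minimal sketch of that per-driver convention on the DDX side, in C.
The token names follow dri2proto, but struct dd_buffer, dd_alloc_bo()
and the 16 bit no-stencil fallback are illustrative assumptions, not
any driver's actual code:

#include <stdbool.h>
#include <stddef.h>

enum dri2_attachment {          /* subset of the dri2proto tokens */
    DRI2BufferFrontLeft,
    DRI2BufferBackLeft,
    DRI2BufferDepth,
    DRI2BufferStencil,
};

struct dd_bo;                                     /* driver buffer object */
struct dd_bo *dd_alloc_bo(int w, int h, int cpp); /* hypothetical helper */

struct dd_buffer {
    enum dri2_attachment attachment;
    struct dd_bo *bo;
    int cpp;
};

static bool requested(const unsigned int *att, int count, unsigned int token)
{
    for (int i = 0; i < count; i++)
        if (att[i] == token)
            return true;
    return false;
}

/* Allocate one attachment, honouring the depth/stencil convention. */
static void allocate(struct dd_buffer *buf, int w, int h,
                     const unsigned int *att, int count,
                     struct dd_bo **shared_zs)
{
    switch (buf->attachment) {
    case DRI2BufferDepth:
    case DRI2BufferStencil:
        if (requested(att, count, DRI2BufferDepth) &&
            requested(att, count, DRI2BufferStencil)) {
            /* intel-style: depth and stencil share one 32 bpp BO */
            if (*shared_zs == NULL)
                *shared_zs = dd_alloc_bo(w, h, 4);
            buf->bo = *shared_zs;
            buf->cpp = 4;
        } else {
            /* radeon-style: depth without stencil means a 16 bit z */
            buf->bo = dd_alloc_bo(w, h, 2);
            buf->cpp = 2;
        }
        break;
    default:
        buf->bo = dd_alloc_bo(w, h, 4);
        buf->cpp = 4;
        break;
    }
}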

cheers,
Kristian



Re: DRI2 + buffer creation

2009-03-31 Thread Dave Airlie
2009/3/31 Kristian Høgsberg k...@bitplanet.net:
> On Tue, Mar 31, 2009 at 4:46 AM, Alan Hourihane al...@fairlite.co.uk wrote:
> > On Tue, 2009-03-31 at 15:43 +1000, Dave Airlie wrote:
> > > So I've been playing a bit more with DRI2, and I'm having trouble
> > > finding how buffer creation was meant to work for depth buffers.
> > >
> > > If my app uses this visual:
> > > 0xbc 24 tc  0 24  0 r  .  .  8  8  8  0  0 16  0  0  0  0  0  0 0 None
> > >
> > > which is 24-bit color + 16-bit depth, I don't have enough information in
> > > the DDX to create a depth buffer with a cpp of 2; the DDX can only see
> > > the drawable information and knows nothing about the visual.
> > >
> > > Now it goes and creates a set of 4 buffers to give back to Mesa, which
> > > then takes the cpp of the depth buffer as 4 when it clearly needs to be
> > > 2, and then bad things happen.
> > >
> > > So should I just be creating the depth buffer in Mesa? Does the DDX
> > > really need to know about it at all...
> >
> > Yep, go create the depth buffer in Mesa. The DDX doesn't really need to
> > know about it.
>
> Creating the depth buffer and the other aux buffers through the X
> server is more complicated, but it was done that way for a reason.
> Two different clients can render to the same GLX drawable, and in that
> case they need to share the aux buffers for the drawable.  I
> considered letting the DRI driver create the buffers, but in that case
> it needs to tell the X server about it, and then you get extra
> roundtrips and races between DRI clients to create the buffers.  So
> creating them in the X server is the simplest solution, given what we
> have to support.
>
> As it is, the hw specific part of X creates the buffers from the
> tokens passed in by the DRI driver and can implement whichever
> convention the DRI driver expects.  For example, for intel, if both
> depth and stencil are requested, the DDX driver knows to only allocate
> one BO for the two buffers and the DRI driver expects this.  Likewise,
> if radeon expects a 16 bit depth buffer when there is no stencil,
> that's the behaviour the DDX should implement.  If a situation
> arises where a combination of buffers required for a visual doesn't
> uniquely imply which buffer sizes are expected (say, a fbconfig
> without stencil could use either a 16 bit or a 32 bit depth buffer),
> we need to introduce new DRI2 buffer tokens along the lines of Depth16
> and Depth32 so the DRI driver can communicate which one it wants.


But you don't give the DDX enough information to make this decision.
I get the drawable, an attachments list, and a count; I don't get the
visual or fbconfig.

Now radeon isn't special; Intel has the same bug. Most GPUs have 3
cases they can deal with: z24s8, z24, z16. However, if I pick a z16
visual there is no way the DDX can differentiate it from a z24; all it
gets is a drawable. It then uses CreatePixmap, which creates a pixmap
that is 2x too large, wasting VRAM, and with the wrong bpp component.
What I've done for now is intercept it on the Mesa side and hack the
bpp down to 2, but this still wastes memory and ignores the actual
problem: the DDX doesn't have the info to make the correct decision.
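
To make the gap concrete, here is a compilable sketch with stand-in
types; the real hook takes xserver types, but the shape of the argument
list is the point:

/* All the DDX buffer hook receives: a drawable, a list of attachment
 * tokens and a count.  The types are stand-ins for the xserver ones. */
struct drawable {
    int width, height;
    int depth;          /* color depth, e.g. 24 */
    int bits_per_pixel; /* e.g. 32, i.e. cpp 4 */
};

struct dri2_buffer;     /* opaque; allocation elided */

struct dri2_buffer *
create_buffers(struct drawable *draw, const unsigned int *attachments,
               int count)
{
    /* For a depth attachment the only size hint in scope is
     * draw->bits_per_pixel.  A z16 and a z24 visual present the same
     * drawable, so the depth buffer gets sized at cpp 4 either way --
     * 2x too large for z16. */
    (void)draw; (void)attachments; (void)count;
    return 0;
}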

Dave.




Re: DRI2 + buffer creation

2009-03-31 Thread Kristian Høgsberg
2009/3/31 Dave Airlie airl...@gmail.com:
> 2009/3/31 Kristian Høgsberg k...@bitplanet.net:
> > On Tue, Mar 31, 2009 at 4:46 AM, Alan Hourihane al...@fairlite.co.uk wrote:
> > > On Tue, 2009-03-31 at 15:43 +1000, Dave Airlie wrote:
> > > > So I've been playing a bit more with DRI2, and I'm having trouble
> > > > finding how buffer creation was meant to work for depth buffers.
> > > >
> > > > If my app uses this visual:
> > > > 0xbc 24 tc  0 24  0 r  .  .  8  8  8  0  0 16  0  0  0  0  0  0 0 None
> > > >
> > > > which is 24-bit color + 16-bit depth, I don't have enough information in
> > > > the DDX to create a depth buffer with a cpp of 2; the DDX can only see
> > > > the drawable information and knows nothing about the visual.
> > > >
> > > > Now it goes and creates a set of 4 buffers to give back to Mesa, which
> > > > then takes the cpp of the depth buffer as 4 when it clearly needs to be
> > > > 2, and then bad things happen.
> > > >
> > > > So should I just be creating the depth buffer in Mesa? Does the DDX
> > > > really need to know about it at all...
> > >
> > > Yep, go create the depth buffer in Mesa. The DDX doesn't really need to
> > > know about it.
> >
> > Creating the depth buffer and the other aux buffers through the X
> > server is more complicated, but it was done that way for a reason.
> > Two different clients can render to the same GLX drawable, and in that
> > case they need to share the aux buffers for the drawable.  I
> > considered letting the DRI driver create the buffers, but in that case
> > it needs to tell the X server about it, and then you get extra
> > roundtrips and races between DRI clients to create the buffers.  So
> > creating them in the X server is the simplest solution, given what we
> > have to support.
> >
> > As it is, the hw specific part of X creates the buffers from the
> > tokens passed in by the DRI driver and can implement whichever
> > convention the DRI driver expects.  For example, for intel, if both
> > depth and stencil are requested, the DDX driver knows to only allocate
> > one BO for the two buffers and the DRI driver expects this.  Likewise,
> > if radeon expects a 16 bit depth buffer when there is no stencil,
> > that's the behaviour the DDX should implement.  If a situation
> > arises where a combination of buffers required for a visual doesn't
> > uniquely imply which buffer sizes are expected (say, a fbconfig
> > without stencil could use either a 16 bit or a 32 bit depth buffer),
> > we need to introduce new DRI2 buffer tokens along the lines of Depth16
> > and Depth32 so the DRI driver can communicate which one it wants.
>
> But you don't give the DDX enough information to make this decision.
> I get the drawable, an attachments list, and a count; I don't get the
> visual or fbconfig.
>
> Now radeon isn't special; Intel has the same bug. Most GPUs have 3
> cases they can deal with: z24s8, z24, z16. However, if I pick a z16
> visual there is no way the DDX can differentiate it from a z24; all it
> gets is a drawable. It then uses CreatePixmap, which creates a pixmap
> that is 2x too large, wasting VRAM, and with the wrong bpp component.
> What I've done for now is intercept it on the Mesa side and hack the
> bpp down to 2, but this still wastes memory and ignores the actual
> problem: the DDX doesn't have the info to make the correct decision.

That's the case I was describing in the last sentence.  When the DDX
gets the set of buffers to allocate, it doesn't know whether to
allocate a 16 or a 24 bit depth buffer.  What I'm suggesting is that we
add a new buffer token to the DRI2 protocol, DRI2BufferDepth16, which
the dri driver can use to indicate that it wants a 16 bit depth buffer
even if the drawable is 24 bpp.  It requires a dri2 proto bump, and
the loader needs to tell the dri driver that the DRI2BufferDepth16
token is available.
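
A sketch of the proposal in C (DRI2BufferDepth16 and the token values
are illustrative, not current dri2proto):

enum {
    DRI2BufferDepth   = 4,   /* existing: sized from the drawable's bpp */
    DRI2BufferDepth16 = 10,  /* proposed: explicitly a 16 bit z buffer */
};

/* DRI driver side: it knows the fbconfig, so it picks the token.
 * depth16_supported would come from the loader handshake. */
static unsigned int
depth_attachment(int depth_bits, int depth16_supported)
{
    if (depth_bits == 16 && depth16_supported)
        return DRI2BufferDepth16;
    return DRI2BufferDepth;
}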

cheers,
Kristian



Re: DRI2 + buffer creation

2009-03-31 Thread Ian Romanick

Kristian Høgsberg wrote:
>
> That's the case I was describing in the last sentence.  When the DDX
> gets the set of buffers to allocate, it doesn't know whether to
> allocate a 16 or a 24 bit depth buffer.  What I'm suggesting is that we
> add a new buffer token to the DRI2 protocol, DRI2BufferDepth16, which
> the dri driver can use to indicate that it wants a 16 bit depth buffer
> even if the drawable is 24 bpp.  It requires a dri2 proto bump, and
> the loader needs to tell the dri driver that the DRI2BufferDepth16
> token is available.

Err... this is a band-aid on a bigger problem.  What about 24-bit depth
with 16-bit color?  What about when we start to support multisampling?
Etc.  If the DDX needs the fbconfig information, why not just give it
the fbconfig?  Right?
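
In other words, something along these lines instead of per-buffer size
tokens; the struct and request shape are illustrative, not existing
protocol:

/* Everything the DDX would need to size any aux buffer, in one place. */
struct dri2_fbconfig_info {
    int red_bits, green_bits, blue_bits, alpha_bits;
    int depth_bits;     /* 16 for Dave's z16 visual */
    int stencil_bits;
    int samples;        /* covers multisampling when it arrives */
};

/* Hypothetical request: the attachments plus the config they belong to. */
int dri2_create_buffers_with_config(unsigned int drawable_id,
                                    const unsigned int *attachments,
                                    int count,
                                    const struct dri2_fbconfig_info *config);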



DRI2 + buffer creation

2009-03-30 Thread Dave Airlie
So I've been playing a bit more with DRI2, and I'm having trouble
finding how buffer creation was meant to work for depth buffers.

If my app uses this visual:
0xbc 24 tc  0 24  0 r  .  .  8  8  8  0  0 16  0  0  0  0  0  0 0 None

which is 24-bit color + 16-bit depth, I don't have enough information in
the DDX to create a depth buffer with a cpp of 2; the DDX can only see
the drawable information and knows nothing about the visual.

Now it goes and creates a set of 4 buffers to give back to Mesa, which
then takes the cpp of the depth buffer as 4 when it clearly needs to be
2, and then bad things happen.
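
For scale, a quick worked sketch of the waste (the 1024x768 window size
is illustrative):

#include <stdio.h>

int main(void)
{
    int w = 1024, h = 768;
    printf("z16 depth buffer:   %d bytes\n", w * h * 2); /* 1572864 */
    printf("allocated at cpp 4: %d bytes\n", w * h * 4); /* 3145728, 2x */
    return 0;
}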

So should I just be creating the depth buffer in Mesa? Does the DDX
really need to know about it at all...

Dave.
