Re: [offtopic] Re: Proposed break in libGL / DRI driver ABI

2005-04-07 Thread Allen Akin
On Thu, Apr 07, 2005 at 01:29:21PM -0400, Adam Jackson wrote:
| I would think it possible to decompose the scene along notional view volumes 
| in the tiler...

In general that doesn't work, because (a) primitives whose clipping is
based on the raster position are clipped in their entirety, rather than
just having the fragments outside the view volume clipped; (b) clipping
computations can yield different parameter values at the boundaries than
interpolation during rasterization would have yielded, leading to
texturing discontinuities or Mach banding.  (These can be really
dramatic if interpolation isn't perspective-correct.)
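A toy numeric check of point (b), assuming a single parameter interpolated
along one edge (my illustration, not code from this thread): clipping
interpolates linearly in clip space, which agrees with perspective-correct
rasterization but not with screen-linear rasterization, so a tile that
clips an edge and a neighbor that doesn't can disagree at the seam.

/* Edge from P0 (w=1) to P1 (w=4); parameter runs 0 -> 1.  Split the
 * edge at its clip-space midpoint (t = 0.5) and compare the value
 * clipping assigns there with what a rasterizer would interpolate at
 * the same screen position. */
#include <stdio.h>

int main(void)
{
    float w0 = 1.0f, w1 = 4.0f;         /* clip-space w of endpoints  */
    float a0 = 0.0f, a1 = 1.0f;         /* interpolated parameter     */
    float t  = 0.5f;                    /* clip-space split point     */

    float a_clip = a0 + t * (a1 - a0);              /* 0.5 */

    /* Screen-space fraction of the split point after projection. */
    float s = (t * w1) / (w0 + t * (w1 - w0));      /* 0.8 */

    float a_linear = a0 + s * (a1 - a0);            /* 0.8: screen-linear */

    /* Perspective-correct: interpolate a/w and 1/w, then divide. */
    float aow = a0 / w0 + s * (a1 / w1 - a0 / w0);
    float oow = 1.0f / w0 + s * (1.0f / w1 - 1.0f / w0);
    float a_persp = aow / oow;                      /* 0.5 */

    printf("clipped %.2f  screen-linear %.2f  perspective-correct %.2f\n",
           a_clip, a_linear, a_persp);
    return 0;
}

The 0.3 gap between the clipped and screen-linear values is exactly the
kind of texturing discontinuity or Mach banding described above.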

If interpolation is done in such a way that you can guarantee reasonable
correspondence between interpolated and clipped parameter values, you
might be able to get away with overlapped view volumes and blending to
minimize the seams.

I haven't looked at this stuff in years, so someone with more current
experience than mine might have a better idea.

| I don't know that the rasteriser is the right place to solve this problem.  

Well, it is true that we don't have any control over the hardware
rasterizers, so we can't depend on solving the whole problem there.
But I doubt it can be solved without work in the OpenGL implementation.

| Eventually X will outgrow the 15-bit coordinate limit, and/or Roland will 
| want to do 1440dpi on A4 or whatever.  So it's not about asking for 12 bits 
| of fragment precision as opposed to 11, it's about asking for infinite 
| precision.

I don't think it's quite that bad. :-)

But you do need to guarantee how parameters are interpolated, and the
OpenGL implementation has to allow parameters to be specified with enough
precision that higher-level libraries can tile the scene while leaving
errors at the seams that are too small to see.  (For example, the OpenGL
implementation shouldn't represent colors with just 8 bits.)

Allen




Re: [offtopic] Re: Proposed break in libGL / DRI driver ABI

2005-04-07 Thread Adam Jackson
On Thursday 07 April 2005 12:53, Allen Akin wrote:
> Tiling involves some subtle tradeoffs.  The OpenGL spec requires that
> the pixel fragments generated by rasterization be unaffected if
> primitives are translated by an integral amount in screen coordinates,
> or if scissoring is enabled.  Those two requirements are intended to
> make it possible to tile rendering without generating visible seams at
> the tile boundaries.  But the same requirements would have to be met by
> a higher-level utility if you want to do tiling there, and it may not be
> possible to meet them if the underlying OpenGL renderer doesn't offer
> enough coordinate and color (etc.) precision to the higher-level
> utility.

I would think it possible to decompose the scene along notional view volumes 
in the tiler, translating the coordinates of each vertex relative to the view 
volume.  The tiler wouldn't need excessive precision because it has the 
original coordinates in the final scene's space; and the rasteriser wouldn't 
need excessive precision because it can only ever do (say) 2048 steps along a 
given axis.  There are precision issues in making sure the view volumes align 
correctly, but they're not as extreme.
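A minimal sketch of that decomposition, assuming a 2D orthographic scene
for simplicity (function names here are hypothetical, not glxproxy code):

#include <GL/gl.h>

/* Render a full_w x full_h logical image as tile_w x tile_h pieces by
 * offsetting the view volume per tile; the rasterizer only ever sees
 * coordinates within one tile, and the tile borders act as clip
 * planes, culling most of the scene from each brick. */
void render_tiled(int full_w, int full_h, int tile_w, int tile_h,
                  void (*draw_scene)(void))
{
    int tx, ty;
    for (ty = 0; ty < full_h; ty += tile_h) {
        for (tx = 0; tx < full_w; tx += tile_w) {
            glViewport(0, 0, tile_w, tile_h);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glOrtho(tx, tx + tile_w, ty, ty + tile_h, -1.0, 1.0);
            draw_scene();
            /* ... read the tile back and place it in the final image ... */
        }
    }
}

The seam-precision caveats above apply: this only looks right if the
renderer's invariance and parameter precision hold up at the tile borders.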

I don't know if glxproxy does this yet.  It arguably should for performance 
reasons, because the border of each view volume is a clip plane, so when 
you're distributing the rasterisation you can cull huge portions of your 
scene from each brick.

I don't know that the rasteriser is the right place to solve this problem.  
Eventually X will outgrow the 15-bit coordinate limit, and/or Roland will 
want to do 1440dpi on A4 or whatever.  So it's not about asking for 12 bits 
of fragment precision as opposed to 11, it's about asking for infinite 
precision.

- ajax




Re: [offtopic] Re: Proposed break in libGL / DRI driver ABI

2005-04-07 Thread Allen Akin
On Thu, Apr 07, 2005 at 10:40:55AM -0400, Adam Jackson wrote:
| This would suggest to me one or more of the following:
| 
| - xprint should be investigating tiled rendering solutions, possibly based on
|   the glxproxy code from Xdmx
| - GL is an architecturally poor match for printing

In OpenGL the concept of a framebuffer is fundamental, so if you believe
a printing process can't afford a framebuffer, then OpenGL is definitely
a poor match for that process.

On the flip side, it seems to me that people want features for printing
that are part of the OpenGL rendering model and that are most easily
implemented with a framebuffer. :-)

Tiling involves some subtle tradeoffs.  The OpenGL spec requires that
the pixel fragments generated by rasterization be unaffected if
primitives are translated by an integral amount in screen coordinates,
or if scissoring is enabled.  Those two requirements are intended to
make it possible to tile rendering without generating visible seams at
the tile boundaries.  But the same requirements would have to be met by
a higher-level utility if you want to do tiling there, and it may not be
possible to meet them if the underlying OpenGL renderer doesn't offer
enough coordinate and color (etc.) precision to the higher-level
utility.

So at first glance, it appears to me that the only place to solve the
problem is in the OpenGL implementation.  I wouldn't expect it to be
easy, though.

Allen




[offtopic] Re: Proposed break in libGL / DRI driver ABI

2005-04-07 Thread Adam Jackson
On Wednesday 06 April 2005 19:35, Roland Mainz wrote:
> Brian Paul wrote:
> > As is, you can't exceed 4K x 4K resolution without increasing
> > MAX_WIDTH/HEIGHT.  Your glViewport call will be clamped to those
> > limits if you specify larger.
>
> Which leads to some unfortunate problems for printing: think of an
> ISO-A4 page at 600dpi, where a GL window covering the whole paper surface
> is beyond the current 4Kx4K limit (that's the reason why the print demo
> in "glxgears" simply shrinks the width/height obtained from
> |XpGetPageDimensions()| (AFAIK to 1/2 or 1/3) and then centers the
> window on the page before starting to print).

This would suggest to me one or more of the following:

- xprint should be investigating tiled rendering solutions, possibly based on
  the glxproxy code from Xdmx
- GL is an architecturally poor match for printing

- ajax




Re: Proposed break in libGL / DRI driver ABI

2005-04-07 Thread Julien Lafon
On Apr 7, 2005 2:23 AM, Roland Mainz <[EMAIL PROTECTED]> wrote:
> Brian Paul wrote:
> [snip]
> > > What about making MAX_WIDTH and MAX_HEIGHT runtime-configurable - would
> > > that be possible (for stack allocations the Mesa code then has to depend
> > > on |alloca()|) ?
> >
> > Probably do-able, but a lot of work.
> 
> Depends... if |alloca()| can safely be used on all platforms supported
> by Mesa then this should be no problem to implement. Alternatively the
> code could simply assume that the C compiler supports the C++ feature
> (BTW: Is this supported in C99, too ?) that an array can be dynamically
> sized at declaration (however that's less portable).
We already investigated this option but abandoned the idea after
realising that common data types such as struct span_arrays and all
their consumers would have to be changed.  Without using C++ features
it may be too much hassle, which is why a bump of the MAX_WIDTH/HEIGHT
values is more feasible here.

Julien
-- 
Julien Lafon
Senior Staff Engineer, Hitachi




Re: Proposed break in libGL / DRI driver ABI

2005-04-07 Thread Julien Lafon
On Apr 7, 2005 1:30 AM, Brian Paul <[EMAIL PROTECTED]> wrote:
> Feel free to increase MAX_WIDTH/HEIGHT in your copy of Mesa and try
> running various apps.
We have increased MAX_WIDTH/HEIGHT in our internal builds, but this
limit will still hit the average Linux user using either the PostScript
or the software renderer.

Julien
-- 
Julien Lafon
Senior Staff Engineer, Hitachi




Re: Proposed break in libGL / DRI driver ABI

2005-04-07 Thread Alan Coopersmith
Roland Mainz wrote:
> Depends... if |alloca()| can safely be used on all platforms supported
> by Mesa then this should be no problem to implement.
I don't think the Solaris implementation of alloca counts as "safe"
unfortunately, due to this warning in the man page:
   If the allocated block is beyond the current stack limit,
   the resulting behavior is undefined.
It certainly scares me away from most uses.
--
-Alan Coopersmith-   [EMAIL PROTECTED]
 Sun Microsystems, Inc. - X Window System Engineering


Re: Proposed break in libGL / DRI driver ABI

2005-04-07 Thread Roland Mainz
Brian Paul wrote:
[snip]
> > What about making MAX_WIDTH and MAX_HEIGHT runtime-configurable - would
> > that be possible (for stack allocations the Mesa code then has to depend
> > on |alloca()|) ?
> 
> Probably do-able, but a lot of work.

Depends... if |alloca()| can safely be used on all platforms supported
by Mesa then this should be no problem to implement. Alternatively the
code could simply assume that the C compiler supports the C++ feature
(BTW: Is this supported in C99, too ?) that an array can be dynamically
sized at declaration (however that's less portable).
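For reference, a minimal sketch of the three allocation patterns under
discussion (hypothetical helper names, not Mesa code):

#include <alloca.h>   /* not in POSIX; see the caveats elsewhere in this thread */
#include <stdlib.h>

void span_with_vla(int max_width)
{
    float depth[max_width];   /* C99 variable-length array: stack-allocated,
                               * dynamically sized, but not portable to C89 */
    (void)depth;
}

void span_with_alloca(int max_width)
{
    /* alloca(): stack-allocated, freed on return, but behavior is
     * undefined if it exceeds the stack limit (the Solaris man-page
     * warning quoted above). */
    float *depth = alloca(max_width * sizeof *depth);
    (void)depth;
}

void span_with_heap(int max_width)
{
    /* malloc(): portable and safe, but adds allocation overhead to
     * inner rasterization loops. */
    float *depth = malloc(max_width * sizeof *depth);
    free(depth);
}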



Bye,
Roland

-- 
  __ .  . __
 (o.\ \/ /.o) [EMAIL PROTECTED]
  \__\/\/__/  MPEG specialist, C&&JAVA&&Sun&&Unix programmer
  /O /==\ O\  TEL +49 641 7950090
 (;O/ \/ \O;)




Re: Proposed break in libGL / DRI driver ABI

2005-04-06 Thread Brian Paul
Roland Mainz wrote:
> Brian Paul wrote:
> [snip]
>>> What about making MAX_WIDTH and MAX_HEIGHT runtime-configurable - would
>>> that be possible (for stack allocations the Mesa code then has to depend
>>> on |alloca()|) ?
>> Probably do-able, but a lot of work.
> Depends... if |alloca()| can safely be used on all platforms supported
> by Mesa then this should be no problem to implement.  Alternatively the
> code could simply assume that the C compiler supports the C++ feature
> (BTW: Is this supported in C99, too ?) that an array can be dynamically
> sized at declaration (however that's less portable).
I don't want to create a dependency on C99's variable-length arrays.
I'm also leery of alloca() since it's not in POSIX.
"grep MAX_WIDTH src/mesa/*/*.[ch] | wc" shows there's about 160
occurrences of MAX_WIDTH that would have to be changed for dynamic
allocation.  A lot of work.

-Brian


Re: Proposed break in libGL / DRI driver ABI

2005-04-06 Thread Roland Mainz
Brian Paul wrote:
> >>>Ian Romanick wrote:
> >>>When I look at xc/extras/Mesa/src/mesa/main/config.h I see more items on
> >>>my wishlist: Would it be possible to increase |MAX_WIDTH| and
> >>>|MAX_HEIGHT| (and the matching texture limits of the software
> >>>rasterizer) to 8192 to support larger displays (DMX, Xinerama and Xprint
> >>>come to mind) ?
> >>
> >>If you increase MAX_WIDTH/HEIGHT too far, you'll start to see
> >>interpolation errors in triangle rasterization (the software
> >>routines).  The full explanation is long, but basically there needs to
> >>be enough fractional bits in the GLfixed datatype to accommodate
> >>interpolation across the full viewport width/height.
> >
> > Will increasing MAX_WIDTH/HEIGHT affect applications which run in
> > small windows or only those which use resolutions exceeding the 4Kx4K
> > limit?
> 
> Increasing MAX_WIDTH/HEIGHT will result in more memory usage
> regardless of window size.

This is just memory which comes from the stack (excluding the |MALLOC()|
in xc/extras/Mesa/src/mesa/drivers/x11/xm_api.c), right ?

> As is, you can't exceed 4K x 4K resolution without increasing
> MAX_WIDTH/HEIGHT.  Your glViewport call will be clamped to those
> limits if you specify larger.

Which leads to some unfortunate problems for printing: think of an
ISO-A4 page at 600dpi, where a GL window covering the whole paper surface
is beyond the current 4Kx4K limit (that's the reason why the print demo
in "glxgears" simply shrinks the width/height obtained from
|XpGetPageDimensions()| (AFAIK to 1/2 or 1/3) and then centers the
window on the page before starting to print).

> >>In fact, I'm not sure whether we've already gone too far by setting
> >>MAX_WIDTH/HEIGHT to 4096 while the GLfixed type only has 11 fractional
> >>bits.  I haven't heard any reports of bad triangles so far though.
> >
> > Do you know any specific application which may expose bad rendering
> > when the size gets too large?
> 
> No (there's far too many OpenGL apps out there for me to say).
> 
> >>But there probably aren't too many people generating 4Kx4K images.
> >
> > We've been running tests with the glutdemo applications and Xprint at
> > higher resolutions (6Kx8K window size) and did not notice any bad
> > rendering using the software rasterizer.
> 
> How large are your triangles?

See above... usually 8Kx8K is enough for DIN-A4/600dpi; for larger
paper sizes you either go down in surface resolution or you have
to deal with larger coordinates (which leads directly to this
MAX_WIDTH/MAX_HEIGHT debate) ...



Bye,
Roland

-- 
  __ .  . __
 (o.\ \/ /.o) [EMAIL PROTECTED]
  \__\/\/__/  MPEG specialist, C&&JAVA&&Sun&&Unix programmer
  /O /==\ O\  TEL +49 641 7950090
 (;O/ \/ \O;)




Re: Proposed break in libGL / DRI driver ABI

2005-04-06 Thread Roland Mainz
Brian Paul wrote:
> >>>On Apr 5, 2005 10:11 PM, Brian Paul <[EMAIL PROTECTED]> wrote:
> >>>Will increasing MAX_WIDTH/HEIGHT affect applications which run in
> >>>small windows or only those which use resolutions exceeding the 4Kx4K
> >>>limit?
> >>
> >>Increasing MAX_WIDTH/HEIGHT will result in more memory usage
> >>regardless of window size.
> >
> > Do you know how much memory is additionally allocated? If it is less
> > than 1MB then it may not be worth worrying about...
> 
> If you do a grep in the sources for MAX_WIDTH you'll see that it's
> used in lots of places.  Some are for stack allocations, others are
> for heap allocations.  It would take some effort to determine exactly
> how much more memory would be used.  I know of at least one structure
> that contains arrays dimensioned according to MAX_WIDTH that's
> currently just under 1MB.  That's probably the largest.

What about making MAX_WIDTH and MAX_HEIGHT runtime-configurable - would
that be possible (for stack allocations the Mesa code then has to depend
on |alloca()|) ?

> >>As is, you can't exceed 4K x 4K resolution without increasing
> >>MAX_WIDTH/HEIGHT.  Your glViewport call will be clamped to those
> >>limits if you specify larger.
> >
> > Let me reformulate my question: Will increasing MAX_WIDTH/HEIGHT break
> > any existing application at normal video screen sizes?
> 
> Probably not, but I'm not 100% sure.

What about making an experiment and bumping the value to 8Kx8K to check
if we see anything that breaks in the following months ?



Bye,
Roland

-- 
  __ .  . __
 (o.\ \/ /.o) [EMAIL PROTECTED]
  \__\/\/__/  MPEG specialist, C&&JAVA&&Sun&&Unix programmer
  /O /==\ O\  TEL +49 641 7950090
 (;O/ \/ \O;)




Re: Proposed break in libGL / DRI driver ABI

2005-04-06 Thread Brian Paul
Roland Mainz wrote:
> Brian Paul wrote:
>>>>> On Apr 5, 2005 10:11 PM, Brian Paul <[EMAIL PROTECTED]> wrote:
>>>>> Will increasing MAX_WIDTH/HEIGHT affect applications which run in
>>>>> small windows or only those which use resolutions exceeding the 4Kx4K
>>>>> limit?
>>>> Increasing MAX_WIDTH/HEIGHT will result in more memory usage
>>>> regardless of window size.
>>> Do you know how much memory is additionally allocated? If it is less
>>> than 1MB then it may not be worth worrying about...
>> If you do a grep in the sources for MAX_WIDTH you'll see that it's
>> used in lots of places.  Some are for stack allocations, others are
>> for heap allocations.  It would take some effort to determine exactly
>> how much more memory would be used.  I know of at least one structure
>> that contains arrays dimensioned according to MAX_WIDTH that's
>> currently just under 1MB.  That's probably the largest.
>
> What about making MAX_WIDTH and MAX_HEIGHT runtime-configurable - would
> that be possible (for stack allocations the Mesa code then has to depend
> on |alloca()|) ?
Probably do-able, but a lot of work.

>>>> As is, you can't exceed 4K x 4K resolution without increasing
>>>> MAX_WIDTH/HEIGHT.  Your glViewport call will be clamped to those
>>>> limits if you specify larger.
>>> Let me reformulate my question: Will increasing MAX_WIDTH/HEIGHT break
>>> any existing application at normal video screen sizes?
>> Probably not, but I'm not 100% sure.
>
> What about making an experiment and bumping the value to 8Kx8K to check
> if we see anything that breaks in the following months ?
Ordinary applications would probably be fine (I think), but I'm fairly 
confident that if someone created an 8Kx8K framebuffer and drew large 
triangles things would not work properly.

Feel free to increase MAX_WIDTH/HEIGHT in your copy of Mesa and try 
running various apps.

-Brian


Proposed break in libGL / DRI driver ABI, take 2

2005-04-06 Thread Ian Romanick
[I'm just posting this to dri-devel.  We can summarize the results to 
xorg-arch later.]

After much discussion in the original thread, it looks like people are 
willing, even eager, to make very significant changes to the loader / 
driver interface.  Good stuff!  If we're going to do this, we should 
figure out what we want the primary "form" of the interface to be.

Looking at the existing loaders (i.e., libGL and libglx), there are 
three primary forms the interface could take.  I strongly believe that 
we should be explicit and up-front about defining this.  Part of the 
problem with the existing interface, especially for IHVs, is that it is 
poorly documented.  The three forms currently in use are:

1. Lots of statically exported functions.  Older versions of libGL just 
exported a bunch of functions (e.g., XF86DRI*, _glapi_*, etc.) that 
drivers would call.

  Pros: Very simple interface.  People are used to just calling functions.
  Cons: Very difficult to expand the interface in a binary compatible way.
2. A few statically exported functions and a glXGetProcAddress-like 
function.  This is the way more recent versions of libGL work.

  Pros: Much easier to add new and remove old interface functions.
  Cons: Juggling a large, ever-increasing number of function pointers 
is a pain.  Requires a lot of tedious, up-front work for the driver.

3. Loader passes a method table into the driver.  libglx, to a certain 
extent, takes this approach through the use of __GLimports 
(include/GL/internal/glcore.h).

  Pros: If the structure is tagged with a version, it is very easy to 
add new interface functions.  Much easier for drivers to deal with a 
single, externally supplied structure of function pointers.

  Cons: Can't remove functions from the interface.  We have this same 
problem in DRI now (e.g., bindContext and bindContext2 in __DRIcontext). 
 Structure versioning can have its own problems within the driver 
(e.g., lots of 'if ( api->version >= FOO ) { ... } else { ... }' blocks).

My personal opinion, and I think a couple people have voiced similar 
sentiments, is that option 1 is right out.  As far as option 2 and 
option 3 go, I'm a bit torn.  We already use option 2 to import 
functions into the driver, and we use option 3 to export functions from 
the driver.  I may give a /slight/ nod to option 3 because, by virtue of 
collecting all the entry-points in one place, it makes it much easier to 
generate unified developer documentation (all hail doxygen!).
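A bare-bones illustration of option 3, a versioned method table handed
from the loader to the driver (names here are hypothetical, not the
actual DRI interface):

struct loader_funcs {
    int version;                 /* bumped whenever entries are appended */
    void *(*create_drawable)(void *screen, unsigned long xid);
    void  (*destroy_drawable)(void *drawable);
    void  (*destroy_context)(void *context);
    /* new entry points go here; drivers must check 'version'
     * before touching them */
};

void *driverCreateScreen(const struct loader_funcs *loader)
{
    if (loader->version >= 2) {
        /* safe to use entries added in interface version 2 */
    }
    /* ... */
    return 0;
}

This also shows the con noted above: entries can only be appended, never
removed, and version checks accumulate inside the driver.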

Thoughts?



Re: Proposed break in libGL / DRI driver ABI

2005-04-06 Thread Brian Paul
Julien Lafon wrote:
> On Apr 6, 2005 3:37 PM, Brian Paul <[EMAIL PROTECTED]> wrote:
>> Julien Lafon wrote:
>>> On Apr 5, 2005 10:11 PM, Brian Paul <[EMAIL PROTECTED]> wrote:
>>> Will increasing MAX_WIDTH/HEIGHT affect applications which run in
>>> small windows or only those which use resolutions exceeding the 4Kx4K
>>> limit?
>> Increasing MAX_WIDTH/HEIGHT will result in more memory usage
>> regardless of window size.
> Do you know how much memory is additionally allocated? If it is less
> than 1MB then it may not be worth worrying about...
If you do a grep in the sources for MAX_WIDTH you'll see that it's 
used in lots of places.  Some are for stack allocations, others are 
for heap allocations.  It would take some effort to determine exactly 
how much more memory would be used.  I know of at least one structure 
that contains arrays dimensioned according to MAX_WIDTH that's 
currently just under 1MB.  That's probably the largest.


>> As is, you can't exceed 4K x 4K resolution without increasing
>> MAX_WIDTH/HEIGHT.  Your glViewport call will be clamped to those
>> limits if you specify larger.
> Let me reformulate my question: Will increasing MAX_WIDTH/HEIGHT break
> any existing application at normal video screen sizes?
Probably not, but I'm not 100% sure.

>>>> But there probably aren't too many people generating 4Kx4K images.
>>> We've been running tests with the glutdemo applications and Xprint at
>>> higher resolutions (6Kx8K window size) and did not notice any bad
>>> rendering using the software rasterizer.
>> How large are your triangles?
> I thought we had tested all combinations, ranging from very small
> triangles up to full window size.
Try a sliver triangle that goes from the lower-left corner of the 
viewport to the upper-right.  Verify that the two long edges exactly 
meet at the upper-right.
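A quick GLUT sketch of that stress test (my construction, not a demo
from the thread; window sizes this big also assume a raised MAX_WIDTH
and a willing X server):

#include <GL/glut.h>

/* One sliver triangle spanning the viewport corner to corner.  With
 * the default identity matrices, coordinates are in [-1,1] clip space,
 * so these vertices land exactly on the viewport corners.  If the two
 * long edges fail to meet at the upper-right, the interpolation error
 * is visible directly. */
static void draw(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glBegin(GL_TRIANGLES);
    glVertex2f(-1.0f, -1.0f);                       /* lower-left      */
    glVertex2f(-1.0f + 2.0f / 4096.0f, -1.0f);      /* one pixel over  */
    glVertex2f( 1.0f,  1.0f);                       /* upper-right     */
    glEnd();
    glutSwapBuffers();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutInitWindowSize(4096, 4096);
    glutCreateWindow("sliver");
    glutDisplayFunc(draw);
    glutMainLoop();
    return 0;
}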


>> The interpolation error will accumulate and be most noticeable with
>> very large triangles.
> Can you point me to one of the glutdemo applications that is likely to fail?
I can't.  It would be easier to write a new program that stressed the 
scenario that's likely to fail.

-Brian


Re: Proposed break in libGL / DRI driver ABI

2005-04-06 Thread Julien Lafon
On Apr 6, 2005 3:37 PM, Brian Paul <[EMAIL PROTECTED]> wrote:
> Julien Lafon wrote:
> > On Apr 5, 2005 10:11 PM, Brian Paul <[EMAIL PROTECTED]> wrote:
> > Will increasing MAX_WIDTH/HEIGHT affect applications which run in
> > small windows or only those which use resolutions exceeding the 4Kx4K
> > limit?
> 
> Increasing MAX_WIDTH/HEIGHT will result in more memory usage
> regardless of window size.
Do you know how much memory is additionally allocated? If it is less
than 1MB then it may not be worth worrying about...

> 
> As is, you can't exceed 4K x 4K resolution without increasing
> MAX_WIDTH/HEIGHT.  Your glViewport call will be clamped to those
> limits if you specify larger.
Let me reformulate my question: Will increasing MAX_WIDTH/HEIGHT break
any existing application at normal video screen sizes?

> 
> >>But there probably aren't too many people generating 4Kx4K images.
> >
> > We've been running tests with the glutdemo applications and Xprint at
> > higher resolutions (6Kx8K window size) and did not notice any bad
> > rendering using the software rasterizer.
> 
> How large are your triangles?
I thought we had tested all combinations, ranging from very small
triangles up to full window size.

> 
> The interpolation error will accumulate and be most noticable with
> very large triangles.
Can you point me to one of the glutdemo applications that is likely to fail?

Julien
-- 
Julien Lafon
Senior Staff Engineer, Hitachi




Re: Proposed break in libGL / DRI driver ABI

2005-04-06 Thread Julien Lafon
On Apr 5, 2005 10:11 PM, Brian Paul <[EMAIL PROTECTED]> wrote:
> Roland Mainz wrote:
> > Ian Romanick wrote:
> > When I look at xc/extras/Mesa/src/mesa/main/config.h I see more items on
> > my wishlist: Would it be possible to increase |MAX_WIDTH| and
> > |MAX_HEIGHT| (and the matching texture limits of the software
> > rasterizer) to 8192 to support larger displays (DMX, Xinerama and Xprint
> > come to mind) ?
> 
> If you increase MAX_WIDTH/HEIGHT too far, you'll start to see
> interpolation errors in triangle rasterization (the software
> routines).  The full explanation is long, but basically there needs to
> be enough fractional bits in the GLfixed datatype to accommodate
> interpolation across the full viewport width/height.
Will increasing MAX_WIDTH/HEIGHT affect applications which run in
small windows or only those which use resolutions exceeding the 4Kx4K
limit?

> 
> In fact, I'm not sure whether we've already gone too far by setting
> MAX_WIDTH/HEIGHT to 4096 while the GLfixed type only has 11 fractional
> bits.  I haven't heard any reports of bad triangles so far though.
Do you know any specific application which may expose bad rendering
when the size gets too large?

> But there probably aren't too many people generating 4Kx4K images.
We've been running tests with the glutdemo applications and Xprint at
higher resolutions (6Kx8K window size) and did not notice any bad
rendering using the software rasterizer.

Julien
-- 
Julien Lafon
Senior Staff Engineer, Hitachi




Re: Proposed break in libGL / DRI driver ABI

2005-04-06 Thread Brian Paul
Julien Lafon wrote:
> On Apr 5, 2005 10:11 PM, Brian Paul <[EMAIL PROTECTED]> wrote:
>> Roland Mainz wrote:
>>> Ian Romanick wrote:
>>> When I look at xc/extras/Mesa/src/mesa/main/config.h I see more items on
>>> my wishlist: Would it be possible to increase |MAX_WIDTH| and
>>> |MAX_HEIGHT| (and the matching texture limits of the software
>>> rasterizer) to 8192 to support larger displays (DMX, Xinerama and Xprint
>>> come to mind) ?
>> If you increase MAX_WIDTH/HEIGHT too far, you'll start to see
>> interpolation errors in triangle rasterization (the software
>> routines).  The full explanation is long, but basically there needs to
>> be enough fractional bits in the GLfixed datatype to accommodate
>> interpolation across the full viewport width/height.
> Will increasing MAX_WIDTH/HEIGHT affect applications which run in
> small windows or only those which use resolutions exceeding the 4Kx4K
> limit?
Increasing MAX_WIDTH/HEIGHT will result in more memory usage 
regardless of window size.

As is, you can't exceed 4K x 4K resolution without increasing 
MAX_WIDTH/HEIGHT.  Your glViewport call will be clamped to those 
limits if you specify larger.


>> In fact, I'm not sure whether we've already gone too far by setting
>> MAX_WIDTH/HEIGHT to 4096 while the GLfixed type only has 11 fractional
>> bits.  I haven't heard any reports of bad triangles so far though.
> Do you know any specific application which may expose bad rendering
> when the size gets too large?
No (there's far too many OpenGL apps out there for me to say).

>> But there probably aren't too many people generating 4Kx4K images.
> We've been running tests with the glutdemo applications and Xprint at
> higher resolutions (6Kx8K window size) and did not notice any bad
> rendering using the software rasterizer.
How large are your triangles?
The interpolation error will accumulate and be most noticeable with 
very large triangles.

-Brian


Re: Proposed break in libGL / DRI driver ABI

2005-04-06 Thread Brian Paul
Ian Romanick wrote:
> Brian Paul wrote:
>> Adam Jackson wrote:
>>> Yeah, I just threw out glXGetProcAddress as a suggestion.  It's
>>> probably better to pass this table into the driver through the create
>>> context method.
>> [snip]
>> Right.  glXGetProcAddress() should not be used by libGL or the drivers
>> to get internal function pointers.  There should be a new function for
>> that, if we're breaking the ABI.
>
> Not that I necessarily disagree, but what is your reasoning?
I think it's poor design to overload a public API function with extra 
functionality like that.

I realize we didn't have much choice originally.
-Brian


Re: Proposed break in libGL / DRI driver ABI

2005-04-06 Thread Ian Romanick
Brian Paul wrote:
> Adam Jackson wrote:
>> Yeah, I just threw out glXGetProcAddress as a suggestion.  It's
>> probably better to pass this table into the driver through the create
>> context method.
> [snip]
> Right.  glXGetProcAddress() should not be used by libGL or the drivers
> to get internal function pointers.  There should be a new function for
> that, if we're breaking the ABI.
Not that I necessarily disagree, but what is your reasoning?



Re: Proposed break in libGL / DRI driver ABI

2005-04-05 Thread Brian Paul
Adam Jackson wrote:
> On Tuesday 05 April 2005 16:11, Brian Paul wrote:
>> Roland Mainz wrote:
>>> Another item would be to look into what's required to support visuals
>>> beyond 24bit RGB (like 30bit TrueColor visuals) ... someone on IRC
>>> (AFAIK ajax (if I don't mix-up the nicks again :)) said that this may
>>> require an ABI change, too...
>> I doubt an ABI change would be needed for that.
>
> Are you sure about this?
Yup, pretty sure.  An ABI change at the libGL / driver interface isn't 
needed.  I don't know of any place in that interface where 8-bit color 
is an issue.  Please let me know if I'm wrong.


> I thought we treated channels as bytes everywhere, unless GLchan was defined
> to something bigger, and even then only for OSMesa.  Even if it's not an ABI
> change, I suspect that growing GLchan beyond 8 bits while still preserving
> performance is non-trivial.
This is separate from Ian's ABI discussion.  It's true that core Mesa 
has to be recompiled to support 8, 16 or 32-bit color channels. 
That's something I'd like to change in the future.  It will be a lot 
of work but it can be done.

Currently, there aren't any hardware drivers that support > 8-bit 
color channels.  If we did want to support deeper channels in a 
hardware driver we'd have a lot of work to do in any case.  One 
approach would be to compile core Mesa for 16-bit channels, then 
shift/drop bits in the driver whenever we write to a color buffer.  Of 
course, there's more to it than that, but it would be feasible.

As part of the GL_ARB_framebuffer_object work I'm doing, simultaneous 
support for various channel sizes will be more do-able.
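A tiny sketch of the shift/drop approach mentioned above, with core
rendering in 16-bit channels and narrowing on color-buffer writes
(hypothetical helper, not Mesa code):

typedef unsigned short GLchan16;

static unsigned char chan16_to_8(GLchan16 c)
{
    /* Keep the high byte.  Real code might also dither or round;
     * a plain shift is enough to show the idea. */
    return (unsigned char)(c >> 8);
}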


>>> When I look at xc/extras/Mesa/src/mesa/main/config.h I see more items on
>>> my wishlist: Would it be possible to increase |MAX_WIDTH| and
>>> |MAX_HEIGHT| (and the matching texture limits of the software
>>> rasterizer) to 8192 to support larger displays (DMX, Xinerama and Xprint
>>> come to mind) ?
>> If you increase MAX_WIDTH/HEIGHT too far, you'll start to see
>> interpolation errors in triangle rasterization (the software
>> routines).  The full explanation is long, but basically there needs to
>> be enough fractional bits in the GLfixed datatype to accommodate
>> interpolation across the full viewport width/height.
>> In fact, I'm not sure whether we've already gone too far by setting
>> MAX_WIDTH/HEIGHT to 4096 while the GLfixed type only has 11 fractional
>> bits.  I haven't heard any reports of bad triangles so far though.
>> But there probably aren't too many people generating 4Kx4K images.

> Yet.  Big images are becoming a reality.  DMX+glxproxy brings this real close
> to home.

I fully agree that there's a need to render larger images.

>> Before increasing MAX_WIDTH/HEIGHT, someone should do an analysis of
>> the interpolation issues to see what side-effects might pop up.
>
> Definitely.

>> Finally, Mesa has a number of scratch arrays that get dimensioned to
>> [MAX_WIDTH].  Some of those arrays/structs are rather large already.
>
> I looked into allocating these dynamically, but there were one or two sticky
> points (mostly related to making scope act the same) so I dropped it.  It
> could be done though.
A lot of these allocations are on the stack.  Changing them to heap 
allocations might cause some loss of performance too.

-Brian


Re: Proposed break in libGL / DRI driver ABI

2005-04-05 Thread Adam Jackson
On Tuesday 05 April 2005 16:11, Brian Paul wrote:
> Roland Mainz wrote:
> > Another item would be to look into what's required to support visuals
> > beyond 24bit RGB (like 30bit TrueColor visuals) ... someone on IRC
> > (AFAIK ajax (if I don't mix-up the nicks again :)) said that this may
> > require an ABI change, too...
>
> I doubt an ABI change would be needed for that.

Are you sure about this?

I thought we treated channels as bytes everywhere, unless GLchan was defined 
to something bigger, and even then only for OSMesa.  Even if it's not an ABI 
change, I suspect that growing GLchan beyond 8 bits while still preserving 
performance is non-trivial.

> > When I look at xc/extras/Mesa/src/mesa/main/config.h I see more items on
> > my wishlist: Would it be possible to increase |MAX_WIDTH| and
> > |MAX_HEIGHT| (and the matching texture limits of the software
> > rasterizer) to 8192 to support larger displays (DMX, Xinerama and Xprint
> > come to mind) ?
>
> If you increase MAX_WIDTH/HEIGHT too far, you'll start to see
> interpolation errors in triangle rasterization (the software
> routines).  The full explanation is long, but basically there needs to
> be enough fractional bits in the GLfixed datatype to accommodate
> interpolation across the full viewport width/height.
>
> In fact, I'm not sure whether we've already gone too far by setting
> MAX_WIDTH/HEIGHT to 4096 while the GLfixed type only has 11 fractional
> bits.  I haven't heard any reports of bad triangles so far though.
> But there probably aren't too many people generating 4Kx4K images.

Yet.  Big images are becoming a reality.  DMX+glxproxy brings this real close 
to home.

> Before increasing MAX_WIDTH/HEIGHT, someone should do an analysis of
> the interpolation issues to see what side-effects might pop up.

Definitely.

> Finally, Mesa has a number of scratch arrays that get dimensioned to
> [MAX_WIDTH].  Some of those arrays/structs are rather large already.

I looked into allocating these dynamically, but there were one or two sticky 
points (mostly related to making scope act the same) so I dropped it.  It 
could be done though.

- ajax




Re: Proposed break in libGL / DRI driver ABI

2005-04-05 Thread Brian Paul
Adam Jackson wrote:
> On Tuesday 05 April 2005 19:03, Ian Romanick wrote:
>> Adam Jackson wrote:
>>> I have another one:  Hide all the functions that start with XF86DRI*, and
>>> expose them to the driver through a function table or glXGetProcAddress
>>> rather than by allowing the driver to call them directly.  This will
>>> simplify the case where the X server is itself linked against libGL.
>>> Kevin tells me these functions were never intended to be public API
>>> anyway.
>> The only functions that are still used by DRI_NEW_INTERFACE_ONLY drivers
>> are XF86DRICreateDrawable, XF86DRIDestroyDrawable, and
>> XF86DRIDestroyContext.  It should be easy enough to eliminate those, but
>> something other than glXGetProcAddress might be preferable.
>
> Yeah, I just threw out glXGetProcAddress as a suggestion.  It's probably
> better to pass this table into the driver through the create context method.
>
> We can't eliminate the functionality of these calls (I don't think), but they
> should not be visible API from the perspective of the GL client.
Right.  glXGetProcAddress() should not be used by libGL or the drivers 
to get internal function pointers.  There should be a new function for 
that, if we're breaking the ABI.

-Brian



Re: Proposed break in libGL / DRI driver ABI

2005-04-05 Thread Adam Jackson
On Tuesday 05 April 2005 19:03, Ian Romanick wrote:
> Adam Jackson wrote:
> > I have another one:  Hide all the functions that start with XF86DRI*, and
> > expose them to the driver through a function table or glXGetProcAddress
> > rather than by allowing the driver to call them directly.  This will
> > simplify the case where the X server is itself linked against libGL.
> >
> > Kevin tells me these functions were never intended to be public API
> > anyway.
>
> The only functions that are still used by DRI_NEW_INTERFACE_ONLY drivers
> are XF86DRICreateDrawable, XF86DRIDestroyDrawable, and
> XF86DRIDestroyContext.  It should be easy enough to eliminate those, but
> something other than glXGetProcAddress might be preferable.

Yeah, I just threw out glXGetProcAddress as a suggestion.  It's probably 
better to pass this table into the driver through the create context method.

We can't eliminate the functionality of these calls (I don't think), but they 
should not be visible API from the perspective of the GL client.

- ajax




Re: Proposed break in libGL / DRI driver ABI

2005-04-05 Thread Brian Paul
Nicolai Haehnle wrote:
> On Tuesday 05 April 2005 22:11, Brian Paul wrote:
>> If you increase MAX_WIDTH/HEIGHT too far, you'll start to see
>> interpolation errors in triangle rasterization (the software
>> routines).  The full explanation is long, but basically there needs to
>> be enough fractional bits in the GLfixed datatype to accommodate
>> interpolation across the full viewport width/height.
>>
>> In fact, I'm not sure whether we've already gone too far by setting
>> MAX_WIDTH/HEIGHT to 4096 while the GLfixed type only has 11 fractional
>> bits.  I haven't heard any reports of bad triangles so far though.
>> But there probably aren't too many people generating 4Kx4K images.
>>
>> Before increasing MAX_WIDTH/HEIGHT, someone should do an analysis of
>> the interpolation issues to see what side-effects might pop up.
>>
>> Finally, Mesa has a number of scratch arrays that get dimensioned to
>> [MAX_WIDTH].  Some of those arrays/structs are rather large already.
>
> Slightly off-topic, but a thought that occurred to me in this regard was to
> tile rendering.  Basically, do a logical divide of the framebuffer into
> rectangles of, say, 64x64 pixels.  During rasterization, all primitives are
> split according to those tiles and rendered separately.  This has some
> advantages:
>
> a) It could help reduce the interpolation issues you mentioned.  It's
> obviously not a magic bullet, but it can avoid the need for insane
> precision in inner loops.
> b) One could build a multi-threaded rasterizer (where work queues are per
> framebuffer tile), which is going to become all the more interesting once
> dual-core CPUs are widespread.
> c) Better control of the size of scratch structures, possibly even better
> caching behaviour.
This would be FAR more work than simply addressing the interpolation 
issue.  There's lots of subtle conformance issues with the tiling 
approach you suggest.  Consider something simple like line stipples.
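Still, a minimal sketch of the binning step behind Nicolai's point (b)
(hypothetical types, nothing like Mesa's actual code paths; error
handling elided):

#include <stdlib.h>

#define TILE 64

struct tri { float x[3], y[3]; };
struct bin { struct tri **tris; int count, cap; };

static void push(struct bin *b, struct tri *t)
{
    if (b->count == b->cap) {
        b->cap = b->cap ? b->cap * 2 : 16;
        b->tris = realloc(b->tris, b->cap * sizeof *b->tris);
    }
    b->tris[b->count++] = t;
}

/* Assign a triangle to every 64x64 tile its screen bounding box
 * overlaps; each tile's bin can then be rasterized independently,
 * e.g. by one worker thread per core. */
void bin_triangle(struct bin *bins, int tiles_x, int tiles_y,
                  struct tri *t)
{
    float xmin = t->x[0], xmax = t->x[0];
    float ymin = t->y[0], ymax = t->y[0];
    int i, tx, ty, tx0, tx1, ty0, ty1;

    for (i = 1; i < 3; i++) {
        if (t->x[i] < xmin) xmin = t->x[i];
        if (t->x[i] > xmax) xmax = t->x[i];
        if (t->y[i] < ymin) ymin = t->y[i];
        if (t->y[i] > ymax) ymax = t->y[i];
    }
    tx0 = (int)xmin / TILE;  tx1 = (int)xmax / TILE;
    ty0 = (int)ymin / TILE;  ty1 = (int)ymax / TILE;
    if (tx0 < 0) tx0 = 0;
    if (ty0 < 0) ty0 = 0;
    if (tx1 >= tiles_x) tx1 = tiles_x - 1;
    if (ty1 >= tiles_y) ty1 = tiles_y - 1;

    for (ty = ty0; ty <= ty1; ty++)
        for (tx = tx0; tx <= tx1; tx++)
            push(&bins[ty * tiles_x + tx], t);
}

The conformance trouble starts after this point: stipples, wide lines,
and anything else whose state spans tile boundaries has to behave as if
the split never happened.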

-Brian


Re: Proposed break in libGL / DRI driver ABI

2005-04-05 Thread Roland Mainz
Ian Romanick wrote:
> 
> For X.org 6.9 / 7.0 I would like to break the existing libGL / DRI
> driver interface.  There is a *LOT* of crap hanging around in both libGL
> and in the DRI drivers that exists *only* to maintain backwards
> compatibility with older versions of the interface.  Since it's crap, I
> would very much like to flush it.
> 
> I'd like to cut this stuff out for 7.0 for several main reasons:
> 
> - A major release is a logical time to make breaks like this.
> 
> - Bit rot.  Sure, we /assume/ libGL and the DRI drivers still actually
> work with older versions, but how often does it actually get tested?
> 
> - Code aesthetics.  Because of the backwards compatibility mechanisms
> that are in place, especially in libGL, the code can be a bit hard to
> follow.  Removing that code would, in a WAG estimate, eliminate at least
> a couple hundred lines of code.  It would also eliminate a number of
> '#ifdef DRI_NEW_INTERFACE_ONLY' blocks.
> 
> What I'm proposing goes a bit beyond '-DDRI_NEW_INTERFACE_ONLY=1', but
> that is a start.  In include/GL/internal/dri_interface.h (in the Mesa
> tree) there are a number of methods that get converted to 'void *' if
> DRI_NEW_INTERFACE_ONLY is defined.  I propose that we completely remove
> them from the structures and rename some of the remaining methods.  For
> example, __DRIcontextRec::bindContext and __DRIcontextRec::bindContext2
> would be removed, and __DRIcontextRec::bindContext3 would be renamed to
> __DRIcontextRec::bindContext.
> 
> Additionally, there are a few libGL-private structures in
> src/glx/x11/glxclient.h that, due to binary compatibility issues with
> older versions of the interface, can't be changed.  Eliminating support
> for those older interfaces would allow some significant cleaning in
> those structures.  Basically, all of the stuff in glxclient.h with
> DEPRECATED in the name would be removed.  Other, less important, changes
> could also be made to __GLXcontextRec.

Another item would be to look into what's required to support visuals
beyond 24bit RGB (like 30bit TrueColor visuals) ... someone on IRC
(AFAIK ajax (if I don't mix-up the nicks again :)) said that this may
require an ABI change, too...
When I look at xc/extras/Mesa/src/mesa/main/config.h I see more items on
my wishlist: Would it be possible to increase |MAX_WIDTH| and
|MAX_HEIGHT| (and the matching texture limits of the software
rasterizer) to 8192 to support larger displays (DMX, Xinerama and Xprint
come to mind) ?



Bye,
Roland

-- 
  __ .  . __
 (o.\ \/ /.o) [EMAIL PROTECTED]
  \__\/\/__/  MPEG specialist, C&&JAVA&&Sun&&Unix programmer
  /O /==\ O\  TEL +49 641 7950090
 (;O/ \/ \O;)




Re: Proposed break in libGL / DRI driver ABI

2005-04-05 Thread Nicolai Haehnle
On Tuesday 05 April 2005 22:11, Brian Paul wrote:
> If you increase MAX_WIDTH/HEIGHT too far, you'll start to see 
> interpolation errors in triangle rasterization (the software 
> routines).  The full explanation is long, but basically there needs to 
> be enough fractional bits in the GLfixed datatype to accommodate 
> interpolation across the full viewport width/height.
> 
> In fact, I'm not sure whether we've already gone too far by setting 
> MAX_WIDTH/HEIGHT to 4096 while the GLfixed type only has 11 fractional 
> bits.  I haven't heard any reports of bad triangles so far though. 
> But there probably aren't too many people generating 4Kx4K images.
> 
> Before increasing MAX_WIDTH/HEIGHT, someone should do an analysis of 
> the interpolation issues to see what side-effects might pop up.
> 
> Finally, Mesa has a number of scratch arrays that get dimensioned to 
> [MAX_WIDTH].  Some of those arrays/structs are rather large already.

Slightly off-topic, but a thought that occurred to me in this regard was to 
tile rendering. Basically, do a logical divide of the framebuffer into 
rectangles of, say, 64x64 pixels. During rasterization, all primitives are 
split according to those tiles and rendered separately. This has some 
advantages:

a) It could help reduce the interpolation issues you mentioned. It's 
obviously not a magic bullet, but it can avoid the need for insane 
precision in inner loops.
b) One could build a multi-threaded rasterizer (where work queues are per 
framebuffer tile), which is going to become all the more interesting once 
dual-core CPUs are widespread.
c) Better control of the size of scratch structures, possibly even better 
caching behaviour.

cu,
Nicolai




Re: Proposed break in libGL / DRI driver ABI

2005-04-05 Thread Ian Romanick
Adam Jackson wrote:
> I have another one:  Hide all the functions that start with XF86DRI*, and
> expose them to the driver through a function table or glXGetProcAddress
> rather than by allowing the driver to call them directly.  This will simplify
> the case where the X server is itself linked against libGL.
>
> Kevin tells me these functions were never intended to be public API anyway.
The only functions that are still used by DRI_NEW_INTERFACE_ONLY drivers 
are XF86DRICreateDrawable, XF86DRIDestroyDrawable, and 
XF86DRIDestroyContext.  It should be easy enough to eliminate those, but 
something other than glXGetProcAddress might be preferable.




Re: Proposed break in libGL / DRI driver ABI

2005-04-05 Thread Adam Jackson
On Tuesday 05 April 2005 14:06, Ian Romanick wrote:
> For X.org 6.9 / 7.0 I would like to break the existing libGL / DRI
> driver interface.  There is a *LOT* of crap hanging around in both libGL
> and in the DRI drivers that exists *only* to maintain backwards
> compatibility with older versions of the interface.  Since it's crap, I
> would very much like to flush it.
>
> I'd like to cut this stuff out for 7.0 for several main reasons:
>
> - A major release is a logical time to make breaks like this.
>
> - Bit rot.  Sure, we /assume/ libGL and the DRI drivers still actually
> work with older versions, but how often does it actually get tested?
>
> - Code aesthetics.  Because of the backwards compatibility mechanisms
> that are in place, especially in libGL, the code can be a bit hard to
> follow.  Removing that code would, in a WAG estimate, eliminate at least
> a couple hundred lines of code.  It would also eliminate a number of
> '#ifdef DRI_NEW_INTERFACE_ONLY' blocks.
>
> What I'm proposing goes a bit beyond '-DDRI_NEW_INTERFACE_ONLY=1', but
> that is a start.  In include/GL/internal/dri_interface.h (in the Mesa
> tree) there are a number of methods that get converted to 'void *' if
> DRI_NEW_INTERFACE_ONLY is defined.  I propose that we completely remove
> them from the structures and rename some of the remaining methods.  For
> example, __DRIcontextRec::bindContext and __DRIcontextRec::bindContext2
> would be removed, and __DRIcontextRec::bindContext3 would be renamed to
> __DRIcontextRec::bindContext.
>
> Additionally, there are a few libGL-private structures in
> src/glx/x11/glxclient.h that, due to binary compatibility issues with
> older versions of the interface, can't be changed.  Eliminating support
> for those older interfaces would allow some significant cleaning in
> those structures.  Basically, all of the stuff in glxclient.h with
> DEPRECATED in the name would be removed.  Other, less important, changes
> could also be made to __GLXcontextRec.

I have another one:  Hide all the functions that start with XF86DRI*, and 
expose them to the driver through a function table or glXGetProcAddress 
rather than by allowing the driver to call them directly.  This will simplify 
the case where the X server is itself linked against libGL.

Kevin tells me these functions were never intended to be public API anyway.

- ajax


pgpTUVhHUkbRR.pgp
Description: PGP signature


Re: Proposed break in libGL / DRI driver ABI

2005-04-05 Thread Ian Romanick
Keith Whitwell wrote:
> Ian Romanick wrote:
>> For X.org 6.9 / 7.0 I would like to break the existing libGL / DRI
>> driver interface.  There is a *LOT* of crap hanging around in both
>> libGL and in the DRI drivers that exists *only* to maintain backwards
>> compatibility with older versions of the interface.  Since it's crap,
>> I would very much like to flush it.
>>
>> I'd like to cut this stuff out for 7.0 for several main reasons:
>> - A major release is a logical time to make breaks like this.
>> - Bit rot.  Sure, we /assume/ libGL and the DRI drivers still actually
>> work with older versions, but how often does it actually get tested?
> In fact, we know that they don't work, as backwards compatibility was
> broken in one of the recent 6.8.x releases, wasn't it?
>
> Given that is the case we might be able to take advantage of that and
> bring forward some of those changes - the old versions don't work anyway
> so there's absolutely no point keeping the code around for them...
The 6.8.x break was on the server-side *only*.  I made some changes in 
libglx that slightly broke the interface with the DDX.  AFAIK, the 
client-side interfaces /should/ still work.  Like I said, though, I 
don't know that it has been tested...




Re: Proposed break in libGL / DRI driver ABI

2005-04-05 Thread Brian Paul
Roland Mainz wrote:
> Ian Romanick wrote:
>> For X.org 6.9 / 7.0 I would like to break the existing libGL / DRI
>> driver interface.  There is a *LOT* of crap hanging around in both libGL
>> and in the DRI drivers that exists *only* to maintain backwards
>> compatibility with older versions of the interface.  Since it's crap, I
>> would very much like to flush it.
>>
>> I'd like to cut this stuff out for 7.0 for several main reasons:
>> - A major release is a logical time to make breaks like this.
>> - Bit rot.  Sure, we /assume/ libGL and the DRI drivers still actually
>> work with older versions, but how often does it actually get tested?
>> - Code aesthetics.  Because of the backwards compatibility mechanisms
>> that are in place, especially in libGL, the code can be a bit hard to
>> follow.  Removing that code would, in a WAG estimate, eliminate at least
>> a couple hundred lines of code.  It would also eliminate a number of
>> '#ifdef DRI_NEW_INTERFACE_ONLY' blocks.
>>
>> What I'm proposing goes a bit beyond '-DDRI_NEW_INTERFACE_ONLY=1', but
>> that is a start.  In include/GL/internal/dri_interface.h (in the Mesa
>> tree) there are a number of methods that get converted to 'void *' if
>> DRI_NEW_INTERFACE_ONLY is defined.  I propose that we completely remove
>> them from the structures and rename some of the remaining methods.  For
>> example, __DRIcontextRec::bindContext and __DRIcontextRec::bindContext2
>> would be removed, and __DRIcontextRec::bindContext3 would be renamed to
>> __DRIcontextRec::bindContext.
>>
>> Additionally, there are a few libGL-private structures in
>> src/glx/x11/glxclient.h that, due to binary compatibility issues with
>> older versions of the interface, can't be changed.  Eliminating support
>> for those older interfaces would allow some significant cleaning in
>> those structures.  Basically, all of the stuff in glxclient.h with
>> DEPRECATED in the name would be removed.  Other, less important, changes
>> could also be made to __GLXcontextRec.
>
> Another item would be to look into what's required to support visuals
> beyond 24bit RGB (like 30bit TrueColor visuals) ... someone on IRC
> (AFAIK ajax (if I don't mix-up the nicks again :)) said that this may
> require an ABI change, too...

I doubt an ABI change would be needed for that.

> When I look at xc/extras/Mesa/src/mesa/main/config.h I see more items on
> my wishlist: Would it be possible to increase |MAX_WIDTH| and
> |MAX_HEIGHT| (and the matching texture limits of the software
> rasterizer) to 8192 to support larger displays (DMX, Xinerama and Xprint
> come to mind) ?
If you increase MAX_WIDTH/HEIGHT too far, you'll start to see 
interpolation errors in triangle rasterization (the software 
routines).  The full explanation is long, but basically there needs to 
be enough fractional bits in the GLfixed datatype to accommodate 
interpolation across the full viewport width/height.
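A back-of-the-envelope sketch of that limit (my own illustration, using
11 fractional bits like GLfixed): stepping a color channel across a
4096-pixel span with a truncated fixed-point increment loses a full
least-significant bit of an 8-bit channel by the far edge.

#include <stdio.h>

#define FRAC_BITS 11
#define ONE       (1 << FRAC_BITS)          /* 2048 */

int main(void)
{
    int width = 4096;
    double v0 = 0.0, v1 = 255.0;            /* e.g. an 8-bit channel  */

    /* Per-pixel increment, truncated to the fixed-point grid:
     * 255/4096 * 2048 = 127.5 -> 127. */
    long step = (long)((v1 - v0) / width * ONE);
    long acc  = (long)(v0 * ONE);
    int  x;

    for (x = 0; x < width; x++)
        acc += step;

    printf("exact %.3f  fixed-point %.3f  error %.3f\n",
           v1, (double)acc / ONE, v1 - (double)acc / ONE);
    return 0;
}

Widening the span to 8192 pixels makes the accumulated error worse,
which is why bumping MAX_WIDTH without adding fractional bits is risky.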

In fact, I'm not sure whether we've already gone too far by setting 
MAX_WIDTH/HEIGHT to 4096 while the GLfixed type only has 11 fractional 
bits.  I haven't heard any reports of bad triangles so far though. 
But there probably aren't too many people generating 4Kx4K images.

Before increasing MAX_WIDTH/HEIGHT, someone should do an analysis of 
the interpolation issues to see what side-effects might pop up.

Finally, Mesa has a number of scratch arrays that get dimensioned to 
[MAX_WIDTH].  Some of those arrays/structs are rather large already.

-Brian


Re: Proposed break in libGL / DRI driver ABI

2005-04-05 Thread Keith Whitwell
Ian Romanick wrote:
> For X.org 6.9 / 7.0 I would like to break the existing libGL / DRI
> driver interface.  There is a *LOT* of crap hanging around in both libGL
> and in the DRI drivers that exists *only* to maintain backwards
> compatibility with older versions of the interface.  Since it's crap, I
> would very much like to flush it.
>
> I'd like to cut this stuff out for 7.0 for several main reasons:
> - A major release is a logical time to make breaks like this.
> - Bit rot.  Sure, we /assume/ libGL and the DRI drivers still actually
> work with older versions, but how often does it actually get tested?

In fact, we know that they don't work, as backwards compatibility was
broken in one of the recent 6.8.x releases, wasn't it?

Given that is the case we might be able to take advantage of that and 
bring forward some of those changes - the old versions don't work anyway 
so there's absolutely no point keeping the code around for them...

Keith


Proposed break in libGL / DRI driver ABI

2005-04-05 Thread Ian Romanick
For X.org 6.9 / 7.0 I would like to break the existing libGL / DRI 
driver interface.  There is a *LOT* of crap hanging around in both libGL 
and in the DRI drivers that exists *only* to maintain backwards 
compatibility with older versions of the interface.  Since it's crap, I 
would very much like to flush it.

I'd like to cut this stuff out for 7.0 for several main reasons:
- A major release is a logical time to make breaks like this.
- Bit rot.  Sure, we /assume/ libGL and the DRI drivers still actually 
work with older versions, but how often does it actually get tested?

- Code aesthetics.  Because of the backwards compatibility mechanisms 
that are in place, especially in libGL, the code can be a bit hard to 
follow.  Removing that code would, in a WAG estimate, eliminate at least 
a couple hundred lines of code.  It would also eliminate a number of 
'#ifdef DRI_NEW_INTERFACE_ONLY' blocks.

What I'm proposing goes a bit beyond '-DDRI_NEW_INTERFACE_ONLY=1', but 
that is a start.  In include/GL/internal/dri_interface.h (in the Mesa 
tree) there are a number of methods that get converted to 'void *' if 
DRI_NEW_INTERFACE_ONLY is defined.  I propose that we completely remove 
them from the structures and rename some of the remaining methods.  For 
example, __DRIcontextRec::bindContext and __DRIcontextRec::bindContext2 
would be removed, and __DRIcontextRec::bindContext3 would be renamed to 
__DRIcontextRec::bindContext.
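Sketched as structs (member names from the proposal above; signatures
elided and surrounding members omitted):

struct __DRIcontextRec_before {
    void *bindContext;          /* deprecated, kept only for layout */
    void *bindContext2;         /* deprecated, kept only for layout */
    int (*bindContext3)(void);  /* the live entry point             */
};

struct __DRIcontextRec_after {
    int (*bindContext)(void);   /* bindContext3, renamed            */
};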

Additionally, there are a few libGL-private structures in 
src/glx/x11/glxclient.h that, due to binary compatibility issues with 
older versions of the interface, can't be changed.  Eliminating support 
for those older interfaces would allow some significant cleaning in 
those structures.  Basically, all of the stuff in glxclient.h with 
DEPRECATED in the name would be removed.  Other, less important, changes 
could also be made to __GLXcontextRec.

