Re: State of Linux graphics

2005-09-02 Thread mcartoaje
Jon Smirl has written,

> I've written an article that surveys the current State of Linux
> graphics and proposes a possible path forward. This is a long article
> containing a lot of detailed technical information as a guide to
> future developers. Skip over the detailed parts if they aren't
> relevant to your area of work.

My idea is a windowing program with support for vertical synchronization 
interrupts.

Hope this e-letter is threaded correctly.

mihai
--
a work-in-progress windowing program on svgalib at,
http://sourceforge.net/projects/svgalib-windows
(select browse cvs)


__
Switch to Netscape Internet Service.
As low as $9.95 a month -- Sign up today at http://isp.netscape.com/register

Netscape. Just the Net You Need.

New! Netscape Toolbar for Internet Explorer
Search from anywhere on the Web and block those annoying pop-ups.
Download now at http://channels.netscape.com/ns/search/install.jsp
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



Re: State of Linux graphics

2005-09-01 Thread rep stsb
svgalib is spelled "svgalib"

I have started writing a windowing program which
uses svgalib. The source code is available at,

http://sourceforge.net/projects/svgalib-windows
 
select "browse cvs". SourceForge is rebuilding their
site, so some things don't work.



Re: State of Linux graphics

2005-09-01 Thread Sean
On Thu, September 1, 2005 4:38 pm, Jon Smirl said:

> We're not putting all of our eggs in one basket, you keep forgetting
> that we already have a server that supports all of the currently
> existing hardware. The question is where do we want to put our future
> eggs.

Amen!   All these arguments that we can't support an advanced future
design unless the new design also supports $10 third-world video cards
are a complete red herring.

Sean





Re: State of Linux graphics

2005-09-01 Thread Jon Smirl
On 9/1/05, Jim Gettys <[EMAIL PROTECTED]> wrote:
> Not at all.
> 
> We're pursuing two courses of action right now, that are not mutually
> exclusive.
> 
> Jon Smirl's argument is that we can satisfy both needs simultaneously
> with a GL only strategy, and that doing two is counter productive,
> primarily on available resource grounds.
> 
> My point is that I don't think the case has (yet) been made to put all
> eggs into that one basket, and that some of the arguments presented for
> that course of action don't hold together.

We're not putting all of our eggs in one basket, you keep forgetting
that we already have a server that supports all of the currently
existing hardware. The question is where do we want to put our future
eggs.

-- 
Jon Smirl
[EMAIL PROTECTED]


Re: State of Linux graphics

2005-09-01 Thread Allen Akin
On Wed, Aug 31, 2005 at 08:59:23PM -0700, Keith Packard wrote:
| 
| Yeah, two systems, but (I hope) only one used for each card. So far, I'm
| not sure of the value of attempting to provide a mostly-software GL
| implementation in place of existing X drivers.

For the short term it's valuable for the apps that use OpenGL directly.
Games, of course, on platforms from cell-phone/PDA complexity up; also
things like avatar-based user interfaces.  On desktop platforms, plenty
of non-game OpenGL-based apps exist in the Windows world and I'd expect
those will migrate to Linux as the Linux desktop market grows enough to
be commercially viable.  R128-class hardware is fast enough to be useful
for many non-game apps.

For the long term, you have to decide how likely it is that demands for
new functionality on old platforms will arise.  Let's assume for the
moment that they do.  If OpenGL is available, we have the option to use
it.  If OpenGL isn't available, we have to go through another iteration
of the process we're in now, and grow Render (or some new extensions)
with consequent duplication of effort and/or competition for resources.

| I continue to work on devices for which 3D isn't going to happen.  My
| most recent window system runs on a machine with only 384K of memory...

I'm envious -- sounds like a great project.  But such systems aren't
representative of the vast majority of hardware for which we're building
Render and EXA implementations today.  (Nor are they representative of
the hardware on which most Gnome or KDE apps would run, I suspect.)  I
question how much influence they should have over our core graphics
strategy.

| Again, the question is whether a mostly-software OpenGL implementation
| can effectively compete against the simple X+Render graphics model for
| basic 2D application operations...

I think it's pretty clear that it can, since the few operations we want
to accelerate already fit within the OpenGL framework.  

(I just felt a bit of deja vu over this -- I heard eerily similar
arguments from Microsoft when the first versions of Direct3D were
created.)

|   ...and whether there are people interested
| in even trying to make this happen.

In the commercial world people believe such a thing is valuable, and
it's already happened.  (See, for example,
http://www.hybrid.fi/main/esframework/tools.php).

Why hasn't it happened in the Open Source world?  Well, I'd argue it's
largely because we chose to put our limited resources behind projects
inside the X server instead.

| > The point of OpenGL is to expose what the vast majority of current
| > display hardware does well, and not a lot more.  So if a class of apps
| > isn't "happy" with the functionality that OpenGL provides, it won't be
| > happy with the functionality that any other low-level API provides.  The
| > problem lies with the hardware.
| 
| Not currently; the OpenGL we have today doesn't provide for
| component-level compositing or off-screen drawable objects. The former
| is possible in much modern hardware, and may be exposed in GL through
| pixel shaders, while the latter spent far too long mired in the ARB and
| is only now on the radar for implementation in our environment.

Component-level compositing:  Current and past hardware doesn't support
it, so even if you create a new low-level API for it you won't get
acceleration.  You can, however, use a multipass algorithm (as Glitz
does) and get acceleration for it through OpenGL even on marginal old
hardware.  I'd guess that the latter is much more likely to satisfy app
developers than the former (and that's the point I was trying to make
above).
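To make the multipass point concrete, here is a sketch in plain C (not Glitz code; all names and values are invented): one color channel of component-alpha "over" decomposes into two passes that each map onto a standard fixed-function blend mode, so even old hardware can accelerate it.

```c
#include <assert.h>

/* One color channel of component-alpha "over", split into two blend
 * passes that map onto standard fixed-function blend modes (the trick
 * Glitz-style multipass rendering relies on).  Values are normalized
 * floats; function names are invented for this sketch. */

/* Pass 1: dest *= (1 - src_alpha * mask_c).  In GL terms, draw
 * (src_alpha * mask_c) with glBlendFunc(GL_ZERO, GL_ONE_MINUS_SRC_COLOR). */
static float pass1_knock_out(float dest_c, float src_a, float mask_c)
{
    return dest_c * (1.0f - src_a * mask_c);
}

/* Pass 2: dest += src_c * mask_c.  In GL terms, additive blending with
 * glBlendFunc(GL_ONE, GL_ONE). */
static float pass2_add(float dest_c, float src_c, float mask_c)
{
    return dest_c + src_c * mask_c;
}

/* Reference: the single-pass component-alpha "over" formula. */
static float over_component(float dest_c, float src_c, float src_a,
                            float mask_c)
{
    return src_c * mask_c + dest_c * (1.0f - src_a * mask_c);
}
```

The two passes compose to exactly the reference formula, which is why the decomposition loses nothing.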

Off-screen drawable objects:  PBuffers are offscreen drawable objects
that have existed in OpenGL since 1995 (if I remember correctly).
Extensions exist to allow using them as textures, too.  We simply chose
to implement an entirely new mechanism for offscreen rendering rather
than putting our resources into implementing a spec that already
existed.

| So, my motivation for moving to GL drivers is far more about providing
| drivers for closed source hardware and reducing developer effort needed
| to support new hardware ...

I agree that these are extremely important.

|  ...than it is about making the desktop graphics
| faster or more fancy.

Some people do feel otherwise on that point. :-)

| The bulk of 2D applications need to paint solid rectangles, display a
| couple of images with a bit of scaling and draw some text.

Cairo does a lot more than that, so it would seem that we expect that
situation to change (for example, as SVG gains traction).

Aside:  [I know you know this, but I just want to call it out for any
reader who hasn't considered it before.]  You can almost never base a
design on just the most common operations; infrequent operations matter
too, if they're sufficiently expensive.  For example, in a given desktop
scene glyph drawing commands might outnumber 

Re: State of Linux graphics

2005-09-01 Thread Jim Gettys
Not at all.

We're pursuing two courses of action right now, that are not mutually
exclusive.

Jon Smirl's argument is that we can satisfy both needs simultaneously
with a GL only strategy, and that doing two is counter productive,
primarily on available resource grounds.

My point is that I don't think the case has (yet) been made to put all
eggs into that one basket, and that some of the arguments presented for
that course of action don't hold together.

- Jim

On Thu, 2005-09-01 at 16:39 +, Andreas Hauser wrote:
> jg wrote @ Thu, 01 Sep 2005 11:59:33 -0400:
> 
> > Legacy hardware and that being proposed/built for the developing world
> > is tougher; we have code in hand for existing chips, and the price point
> > is even well below cell phones on those devices. They don't have
> > anything beyond basic blit and, miracles of miracles, alpha blending.
> > These are built on one or two generation back fabs, again for cost.
> > And as there are no carriers subsidizing the hardware cost, the real
> > hardware cost has to be met, at very low price points.  They don't come
> > with the features Allen admires in the latest cell phone chips.
> 
> So you suggest that we, who have capable cards that can be had for
> < 50 Euro here, should find something better than X.org to run on
> them, because X.org is concentrating on < 10 Euro chips?
> Somehow I always thought that older xfree86 trees were just fine for them.
> 
> Andy



Re: State of Linux graphics

2005-09-01 Thread Keith Whitwell

Ian Romanick wrote:

> Brian Paul wrote:
> 
> > It's other (non-orientation) texture state I had in mind:
> > 
> > - the texel format (OpenGL has over 30 possible texture formats).
> > - texture size and borders
> > - the filtering mode (linear, nearest, etc)
> > - coordinate wrap mode (clamp, repeat, etc)
> > - env/combine mode
> > - multi-texture state
> 
> Which is why it's such a good target for code generation.  You'd
> generate the texel fetch routine, use that to generate the wrapped texel
> fetch routine, use that to generate the filtered texel fetch routine,
> use that to generate the env/combine routines.
> 
> Once upon a time I had the first part and some of the second part
> written.  Doing just that little bit was slightly faster on a Pentium 3
> and slightly slower on a Pentium 4.  I suspect the problem was that I
> wasn't caching the generated code intelligently enough, so it was
> trashing the CPU cache.  The other problem is that, in the absence of an
> assembler in Mesa, it was really painful to change the code stubs.

Note that the last part is now partially addressed at least - Mesa has 
an integrated and simple runtime assembler for x86 and sse.  There are 
some missing pieces and rough edges, but it's working and useful as it 
stands.


Keith


Re: State of Linux graphics

2005-09-01 Thread Ian Romanick

Brian Paul wrote:

> It's other (non-orientation) texture state I had in mind:
> 
> - the texel format (OpenGL has over 30 possible texture formats).
> - texture size and borders
> - the filtering mode (linear, nearest, etc)
> - coordinate wrap mode (clamp, repeat, etc)
> - env/combine mode
> - multi-texture state

Which is why it's such a good target for code generation.  You'd
generate the texel fetch routine, use that to generate the wrapped texel
fetch routine, use that to generate the filtered texel fetch routine,
use that to generate the env/combine routines.

Once upon a time I had the first part and some of the second part
written.  Doing just that little bit was slightly faster on a Pentium 3
and slightly slower on a Pentium 4.  I suspect the problem was that I
wasn't caching the generated code intelligently enough, so it was
trashing the CPU cache.  The other problem is that, in the absence of an
assembler in Mesa, it was really painful to change the code stubs.
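The layering described above can be sketched (without actual run-time code emission) as ordinary nested C functions; a real implementation would generate and fuse these routines at run time. The one-channel 8-bit format and all names here are invented for illustration.

```c
#include <assert.h>

/* The fetch-routine layering described above, written as ordinary
 * nested C functions so the structure is visible.  Real run-time code
 * generation would emit one fused routine instead.  The one-channel
 * 8-bit format and all names are invented for this sketch. */

typedef struct {
    const unsigned char *texels;
    int width, height;
} tex_t;

/* Innermost layer: raw texel fetch for an 8-bit luminance format. */
static unsigned char fetch_l8(const tex_t *t, int s, int tc)
{
    return t->texels[tc * t->width + s];
}

/* Wrap layer: GL_REPEAT-style coordinate wrapping around the raw fetch
 * (the double modulo handles negative coordinates in C). */
static unsigned char fetch_l8_repeat(const tex_t *t, int s, int tc)
{
    s = ((s % t->width) + t->width) % t->width;
    tc = ((tc % t->height) + t->height) % t->height;
    return fetch_l8(t, s, tc);
}

/* Filter layer: GL_NEAREST just forwards to the wrapped fetch; a
 * GL_LINEAR layer would call it four times and weight the results. */
static unsigned char fetch_l8_repeat_nearest(const tex_t *t, int s, int tc)
{
    return fetch_l8_repeat(t, s, tc);
}
```

Each layer only knows about the one below it, which is what makes the combinatorial explosion of formats, wrap modes, and filters tractable for a code generator.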


Re: State of Linux graphics

2005-09-01 Thread Andreas Hauser
jg wrote @ Thu, 01 Sep 2005 11:59:33 -0400:

> Legacy hardware and that being proposed/built for the developing world
> is tougher; we have code in hand for existing chips, and the price point
> is even well below cell phones on those devices. They don't have
> anything beyond basic blit and, miracles of miracles, alpha blending.
> These are built on one or two generation back fabs, again for cost.
> And as there are no carriers subsidizing the hardware cost, the real
> hardware cost has to be met, at very low price points.  They don't come
> with the features Allen admires in the latest cell phone chips.

So you suggest that we, who have capable cards that can be had for
< 50 Euro here, should find something better than X.org to run on
them, because X.org is concentrating on < 10 Euro chips?
Somehow I always thought that older xfree86 trees were just fine for them.

Andy


Re: State of Linux graphics

2005-09-01 Thread Brian Paul

Alan Cox wrote:

> On Iau, 2005-09-01 at 09:24 -0600, Brian Paul wrote:
> 
> > If the blending is for screen-aligned rects, glDrawPixels would be a
> > far easier path to optimize than texturing.  The number of state
> > combinations related to texturing is pretty overwhelming.
> 
> As Doom showed, however, once you can cut down some of the combinations,
> particularly if you know the texture orientation is limited, you can
> really speed it up.
> 
> Blending is going to end up using textures onto flat surfaces facing the
> viewer which are not rotated or skewed.


Hi Alan,

It's other (non-orientation) texture state I had in mind:

- the texel format (OpenGL has over 30 possible texture formats).
- texture size and borders
- the filtering mode (linear, nearest, etc)
- coordinate wrap mode (clamp, repeat, etc)
- env/combine mode
- multi-texture state

It basically means that the driver may have to do state checks similar 
to this to determine if it can use optimized code.  An excerpt from Mesa:


   if (ctx->Texture._EnabledCoordUnits == 0x1
       && !ctx->FragmentProgram._Active
       && ctx->Texture.Unit[0]._ReallyEnabled == TEXTURE_2D_BIT
       && texObj2D->WrapS == GL_REPEAT
       && texObj2D->WrapT == GL_REPEAT
       && texObj2D->_IsPowerOfTwo
       && texImg->Border == 0
       && texImg->Width == texImg->RowStride
       && (format == MESA_FORMAT_RGB || format == MESA_FORMAT_RGBA)
       && minFilter == magFilter
       && ctx->Light.Model.ColorControl == GL_SINGLE_COLOR
       && ctx->Texture.Unit[0].EnvMode != GL_COMBINE_EXT) {
      if (ctx->Hint.PerspectiveCorrection == GL_FASTEST) {
         if (minFilter == GL_NEAREST
             && format == MESA_FORMAT_RGB
             && (envMode == GL_REPLACE || envMode == GL_DECAL)
             && ((swrast->_RasterMask == (DEPTH_BIT | TEXTURE_BIT)
                  && ctx->Depth.Func == GL_LESS
                  && ctx->Depth.Mask == GL_TRUE)
                 || swrast->_RasterMask == TEXTURE_BIT)
             && ctx->Polygon.StippleFlag == GL_FALSE
             && ctx->Visual.depthBits <= 16) {
            if (swrast->_RasterMask == (DEPTH_BIT | TEXTURE_BIT)) {
               USE(simple_z_textured_triangle);
            }
            else {
               USE(simple_textured_triangle);
            }
         }
  [...]

That's pretty ugly.  Plus the rasterization code for textured 
triangles is fairly complicated.


But the other significant problem is the application has to be sure it 
has set all the GL state correctly so that the fast path is really 
used.  If it gets one thing wrong, you may be screwed.  If different 
drivers optimize slightly different paths, that's another problem.


glDrawPixels would be simpler for both the implementor and user.

-Brian


Re: State of Linux graphics

2005-09-01 Thread Jim Gettys
On Thu, 2005-09-01 at 09:24 -0600, Brian Paul wrote:

> 
> If the blending is for screen-aligned rects, glDrawPixels would be a 
> far easier path to optimize than texturing.  The number of state 
> combinations related to texturing is pretty overwhelming.
> 
> 
> Anyway, I think we're all in agreement about the desirability of 
> having a single, unified driver in the future.
> 

Certainly for most hardware in the developed world I think we all agree
with this. The argument is about when we get to one driver model, not if
we get there, and not what the end state is.

In my view, the battle is on legacy systems and the very low end, not in
hardware we hackers use that might run Windows Vista or Mac OS X.

I've had the (friendly) argument with Allen Akin for 15 years that due
to reduction of hardware costs we can't presume OpenGL.  Someday, he'll
be right, and I'll be wrong.  I'm betting I'll be right for a few more
years, and nothing would tickle me pink more than to lose the argument
soon...

Legacy hardware and that being proposed/built for the developing world
is tougher; we have code in hand for existing chips, and the price point
is even well below cell phones on those devices. They don't have
anything beyond basic blit and, miracles of miracles, alpha blending.
These are built on one or two generation back fabs, again for cost.
And as there are no carriers subsidizing the hardware cost, the real
hardware cost has to be met, at very low price points.  They don't come
with the features Allen admires in the latest cell phone chips.

I think the onus of proof that we can immediately and completely ditch a
second driver framework in favor of everything being OpenGL, even a
software-tuned one, is on those who claim that is viable.
Waving one's hands and claiming there are 100 kbyte closed source
OpenGL/ES implementations doesn't cut it in my view, given where we are
today with the code we already have in hand.  So far, the case hasn't
been made.

Existence proof that we're wrong and can move *entirely* to OpenGL
sooner rather than later would be gratefully accepted..
Regards,
Jim




Re: State of Linux graphics

2005-09-01 Thread Alan Cox
On Iau, 2005-09-01 at 09:24 -0600, Brian Paul wrote:
> If the blending is for screen-aligned rects, glDrawPixels would be a 
> far easier path to optimize than texturing.  The number of state 
> combinations related to texturing is pretty overwhelming.

As Doom showed, however, once you can cut down some of the combinations,
particularly if you know the texture orientation is limited, you can
really speed it up.

Blending is going to end up using textures onto flat surfaces facing the
viewer which are not rotated or skewed.



Re: State of Linux graphics

2005-09-01 Thread Brian Paul

Just a few comments...

Keith Packard wrote:


> Again, the question is whether a mostly-software OpenGL implementation
> can effectively compete against the simple X+Render graphics model for
> basic 2D application operations, and whether there are people interested
> in even trying to make this happen.


I don't know of anyone who's written a "2D-centric" Mesa driver, but
it's feasible.  The basic idea would be to simply fast-path a handful
of basic OpenGL calls that correspond to the basic X operations:


1. Solid rect fill: glScissor + glClear
2. Blit/copy: glCopyPixels
3. Monochrome glyphs: glBitmap
4. PutImage: glDrawPixels

Those OpenGL commands could be directly implemented with whatever 
mechanism is used in conventional X drivers.  I don't think the 
overhead of going through the OpenGL/Mesa API would be significant.
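For illustration, fast path 1 can be modeled in purely software terms: a clear clipped to the scissor rectangle is exactly X's "fill solid rectangle" request. The toy C model below uses invented names and is not Mesa or Xgl code; it only demonstrates the semantics of the glScissor + glClear mapping.

```c
#include <assert.h>

#define FB_W 8
#define FB_H 8

typedef struct { int x, y, w, h; } rect_t;

/* Toy model of fast path 1 (glScissor + glClear): clearing clipped to
 * the scissor rectangle is exactly X's "fill solid rectangle".  An
 * 8-bit single-channel framebuffer; invented names, not Mesa or Xgl. */
static void scissored_clear(unsigned char fb[FB_H][FB_W],
                            rect_t scissor, unsigned char clear_value)
{
    for (int y = scissor.y; y < scissor.y + scissor.h && y < FB_H; y++)
        for (int x = scissor.x; x < scissor.x + scissor.w && x < FB_W; x++)
            fb[y][x] = clear_value;
}
```

In a real driver the body of this loop would be whatever 2D fill mechanism the hardware already exposes, which is Brian's point: the GL entry point adds little overhead over the conventional X driver path.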


If Xgl used those commands and you didn't turn on fancy blending, etc. 
the performance should be fine.  If the hardware supported blending, 
that could be easily exposed too.  The rest of OpenGL would go through 
the usual software paths (slow, but better than nothing).


It might be an interesting project for someone.  After one driver was 
done subsequent ones should be fairly easy.




> > |... However, at the
> > | application level, GL is not a very friendly 2D application-level API.
> > 
> > The point of OpenGL is to expose what the vast majority of current
> > display hardware does well, and not a lot more.  So if a class of apps
> > isn't "happy" with the functionality that OpenGL provides, it won't be
> > happy with the functionality that any other low-level API provides.  The
> > problem lies with the hardware.
> 
> Not currently; the OpenGL we have today doesn't provide for
> component-level compositing or off-screen drawable objects. The former
> is possible in much modern hardware, and may be exposed in GL through
> pixel shaders, while the latter spent far too long mired in the ARB and
> is only now on the radar for implementation in our environment.
> 
> Off-screen drawing is the dominant application paradigm in the 2D world,
> so we can't function without it, while component-level compositing
> provides superior text presentation on LCD screens, which is an
> obviously increasing segment of the market.


Yeah, we really need to make some progress with off-screen rendering 
in our drivers (either Pbuffers or renderbuffers).  I've been working 
on renderbuffers but we really need that overdue memory manager.




> > Jon's right about this:  If you can accelerate a given simple function
> > (blending, say) for a 2D driver, you can accelerate that same function
> > in a Mesa driver for a comparable amount of effort, and deliver a
> > similar benefit to apps.  (More apps, in fact, since it helps
> > OpenGL-based apps as well as Cairo-based apps.)
> 
> Yes, you *can*, but the amount of code needed to perform simple
> pixel-aligned upright blends is a tiny fraction of that needed to deal
> with filtering textures and *then* blending. All of the compositing code
> needed for the Render extension, including accelerated (MMX) paths, is
> implemented in 10K LOC. Optimizing a new case generally involves writing
> about 50 lines of code or so.


If the blending is for screen-aligned rects, glDrawPixels would be a 
far easier path to optimize than texturing.  The number of state 
combinations related to texturing is pretty overwhelming.



Anyway, I think we're all in agreement about the desirability of 
having a single, unified driver in the future.


-Brian


Re: State of Linux graphics

2005-09-01 Thread Antonio Vargas
On 9/1/05, Alan Cox <[EMAIL PROTECTED]> wrote:
> On Iau, 2005-09-01 at 08:00 +0200, Antonio Vargas wrote:
> > 2. whole screen z-buffer, for depth comparison between the pixels
> > generated from each window.
> 
> That one I question in part - if the rectangles are (as is typically the
> case) large then the Z buffer just ups the memory accesses. I guess for
> round windows it might be handy.
> 

There are multiple ways to enhance z-buffer speed:

1. Use a hierarchical z-buffer.

Divide the screen into 16x16-pixel tiles, and keep a per-tile minimum
value. When rendering a poly, you first check the tile z against the
poly z, and if the test fails you can skip 256 pixels in one go.
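A sketch of that tile test in C (illustrative only; the convention assumed here is that larger z is nearer, so the per-tile minimum is the conservative bound, matching the "per-tile minimum value" above):

```c
#include <assert.h>

/* One tile's conservative depth bound: the minimum z currently stored
 * in its 16x16 block of the z-buffer.  Convention: larger z = nearer. */
typedef struct { float min_z; } ztile_t;

/* A polygon whose depths lie in [poly_min_z, poly_max_z] can skip the
 * whole 256-pixel tile when even its nearest point (poly_max_z) is no
 * nearer than the farthest depth already stored in the tile: every
 * per-pixel depth test is then guaranteed to fail. */
static int tile_rejects(const ztile_t *tile, float poly_max_z)
{
    return poly_max_z <= tile->min_z;
}
```

One comparison stands in for 256 per-pixel tests, which is where the speedup comes from.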

2. Use scanline-major rendering:

for_each_scanline{
  clear_z_for_scanline();
  for_each_polygon{
draw_pixels_for_current_polygon_and scanline();
  }
}

This is easily done by modeling the scanline renderer with a coroutine
for each polygon to be painted. The z-buffer is reduced to a single
scanline and is reused for all scanlines, so it's rather fast :)

-- 
Greetz, Antonio Vargas aka winden of network

http://wind.codepixel.com/

Las cosas no son lo que parecen, excepto cuando parecen lo que si son.


Re: State of Linux graphics

2005-09-01 Thread Alan Cox
On Iau, 2005-09-01 at 08:00 +0200, Antonio Vargas wrote:
> 2. whole screen z-buffer, for depth comparison between the pixels
> generated from each window.

That one I question in part - if the rectangles are (as is typically the
case) large then the Z buffer just ups the memory accesses. I guess for
round windows it might be handy.



Re: State of Linux graphics

2005-09-01 Thread Allen Akin
On Wed, Aug 31, 2005 at 08:11:12PM -0700, Ian Romanick wrote:
| Allen Akin wrote:
| > Jon's right about this:  If you can accelerate a given simple function
| > (blending, say) for a 2D driver, you can accelerate that same function
| > in a Mesa driver for a comparable amount of effort, and deliver a
| > similar benefit to apps.  (More apps, in fact, since it helps
| > OpenGL-based apps as well as Cairo-based apps.)
| 
| The difference is that there is a much larger number of state
| combinations possible in OpenGL than in something stripped down for
| "just 2D".  That can make it more difficult to know where to spend the
| time tuning.  ...

I'd try solving the problem by copying what Render+EXA does, because we
already have some evidence that's sufficient.  We know what situations
Render+EXA accelerates, so in Mesa we accelerate just the OpenGL state
vectors that correspond to those situations.  The state analysis code
could be written once and shared.  You know more about that part of Mesa
than I do; do you think writing and documenting the analysis code would
be significantly more time-consuming than what's already gone into
defining and documenting the corresponding components for EXA?  The rest
is device setup, and likely to be roughly equivalent for the two
interfaces.

| The real route forward is to dig deeper into run-time code generation.

In theory, I agree (and I think it would be a really fun project).  In
practice, I've always turned away from it when tempted, because the
progress of the hardware folks is so overwhelming.  Have you seen what's
possible on cell phones that have been shipping since the beginning of
2005?  Amazing.  On low-cost power-constrained devices, a market where
it was sometimes claimed that acceleration wouldn't be practical.

| BTW, Alan, when are you going to start writing code again? >:)

Yeah, certain other IBM people have been on my case, too.  They're
largely to blame for the fact that I'm in this discussion at all.  :-)

Allen


Re: State of Linux graphics

2005-09-01 Thread Antonio Vargas
On 9/1/05, Ian Romanick <[EMAIL PROTECTED]> wrote:
> 
> Allen Akin wrote:
> > On Wed, Aug 31, 2005 at 02:06:54PM -0700, Keith Packard wrote:
> > |
> > | ...So far, 3D driver work has proceeded almost entirely on the
> > | newest documented hardware that people could get. Going back and
> > | spending months optimizing software 3D rendering code so that it works
> > | as fast as software 2D code seems like a thankless task.
> >
> > Jon's right about this:  If you can accelerate a given simple function
> > (blending, say) for a 2D driver, you can accelerate that same function
> > in a Mesa driver for a comparable amount of effort, and deliver a
> > similar benefit to apps.  (More apps, in fact, since it helps
> > OpenGL-based apps as well as Cairo-based apps.)
> 
> The difference is that there is a much larger number of state
> combinations possible in OpenGL than in something stripped down for
> "just 2D".  That can make it more difficult to know where to spend the
> time tuning.  I've spent a fair amount of time looking at Mesa's texture
> blending code, so I know this to be true.
> 
> The real route forward is to dig deeper into run-time code generation.
> There are a large number of possible combinations, but they all look
> pretty similar.  This is ideal for run-time code gen.  The problem is
> that writing correct, tuned assembly for this stuff takes a pretty
> experience developer, and writing correct, tuned code generation
> routines takes an even more experienced developer.  Experienced and more
> experienced developers are, alas, in short supply.

Ian, the easy way would be to concentrate on 2D-like operations and
optimize them by hand. I mean, if _we_ are developing the OpenGL-using
application (xserver-over-OpenGL), we already know what OpenGL
operations and modes are needed, so we can concentrate on coding them
in software. And if this means that we have to detect the case when a
triangle is z-constant, then so be it.

Using an OSX desktop every day, and having experience with software
graphics for small machines, and assuming OSX draws the screen just by
rendering each window to an offscreen buffer and then compositing, our
needs are:

1. offscreen buffers, that can be drawn into. we don't really need
anything fancy, just be able to point the drawing operations to
another memory space. they should be any size, not just power-of-two.

2. whole screen z-buffer, for depth comparison between the pixels
generated from each window.

3. texture+alpha (RGBA) triangles, using any offscreen buffer as a
texture. texturing from a non-power-of-two texture is not that
difficult anymore since about '96 or '97.

4. alpha blending, where the incoming alpha is used as a blending
factor with this equation: scr_color_new = scr_color * (1-alpha) +
tex_color * alpha.

1+2+3 gives us a basic 3d-esque desktop. adding 4 provides the dropshadows ;)
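
Point 4's equation is concrete enough to sketch in C; `blend_channel` and the 0-255 alpha range are illustrative choices, not code from any real driver:

```c
#include <stdint.h>

/* Per-channel blend from point 4 above:
 *   scr_color_new = scr_color * (1 - alpha) + tex_color * alpha
 * Alpha runs 0..255 here, so (1 - alpha) becomes (255 - alpha)/255. */
static uint8_t blend_channel(uint8_t scr, uint8_t tex, uint8_t alpha)
{
    return (uint8_t)((scr * (255 - alpha) + tex * alpha) / 255);
}
```

With alpha = 0 the screen pixel survives untouched; with alpha = 255 the texture replaces it.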

But 3d software rendering is easily sped up by not using a z-buffer,
which is a PITA. Two approaches for solving this:

a. Just sort the polys (they are just 2 polys per window) back to
front and draw at screen-buffer flip. This is easy. Previous work I
did suggests you can reach 16fps for a 320x192x8bit screen with a 10
mips machine ([EMAIL PROTECTED]).

b. Implement a scanline zbuffer, where we have to paint by scanlines
instead of whole triangles. Drawing is delayed until screen-buffer
flip and then we have an outer loop for each screen scanline, middle
loop for each poly that is affected and inner loop for each pixel from
that poly in that scanline.
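
A minimal sketch of approach (a), assuming opaque rectangular windows at one constant depth each (all names are illustrative):

```c
#include <stdlib.h>

/* Painter's algorithm: each window is two triangles at one constant
 * depth, so sorting whole windows back to front and blitting them in
 * that order needs no z-buffer at all. */
struct window { int z; unsigned color; };   /* larger z = farther away */

static int farther_first(const void *a, const void *b)
{
    return ((const struct window *)b)->z - ((const struct window *)a)->z;
}

/* Composite into a single pixel for brevity; a real blit would fill the
 * window's rectangle.  The nearest window is drawn last and wins. */
static unsigned composite(struct window *wins, int n)
{
    unsigned screen = 0;
    qsort(wins, n, sizeof wins[0], farther_first);
    for (int i = 0; i < n; i++)
        screen = wins[i].color;
    return screen;
}
```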

Software rendering is just detecting the common case and coding a
proper path for it. It's not really that difficult to reach memory
speed if you simply forget about implementing all combinations of
graphics modes.

> BTW, Alan, when are you going to start writing code again? >:)
> 
> > So long as people are encouraged by word and deed to spend their time on
> > "2D" drivers, Mesa drivers will be further starved for resources and the
> > belief that OpenGL has nothing to offer "2D" apps will become
> > self-fulfilling.


-- 
Greetz, Antonio Vargas aka winden of network

http://wind.codepixel.com/

Las cosas no son lo que parecen, excepto cuando parecen lo que si son.


Re: State of Linux graphics

2005-09-01 Thread Allen Akin
On Wed, Aug 31, 2005 at 08:11:12PM -0700, Ian Romanick wrote:
| Allen Akin wrote:
|  Jon's right about this:  If you can accelerate a given simple function
|  (blending, say) for a 2D driver, you can accelerate that same function
|  in a Mesa driver for a comparable amount of effort, and deliver a
|  similar benefit to apps.  (More apps, in fact, since it helps
|  OpenGL-based apps as well as Cairo-based apps.)
| 
| The difference is that there is a much larger number of state
| combinations possible in OpenGL than in something stripped down for
| just 2D.  That can make it more difficult to know where to spend the
| time tuning.  ...

I'd try solving the problem by copying what Render+EXA does, because we
already have some evidence that's sufficient.  We know what situations
Render+EXA accelerates, so in Mesa we accelerate just the OpenGL state
vectors that correspond to those situations.  The state analysis code
could be written once and shared.  You know more about that part of Mesa
than I do; do you think writing and documenting the analysis code would
be significantly more time-consuming than what's already gone into
defining and documenting the corresponding components for EXA?  The rest
is device setup, and likely to be roughly equivalent for the two
interfaces.

| The real route forward is to dig deeper into run-time code generation.

In theory, I agree (and I think it would be a really fun project).  In
practice, I've always turned away from it when tempted, because the
progress of the hardware folks is so overwhelming.  Have you seen what's
possible on cell phones that have been shipping since the beginning of
2005?  Amazing.  On low-cost power-constrained devices, a market where
it was sometimes claimed that acceleration wouldn't be practical.

| BTW, Alan, when are you going to start writing code again? :)

Yeah, certain other IBM people have been on my case, too.  They're
largely to blame for the fact that I'm in this discussion at all.  :-)

Allen


Re: State of Linux graphics

2005-09-01 Thread Alan Cox
On Iau, 2005-09-01 at 08:00 +0200, Antonio Vargas wrote:
> 2. whole screen z-buffer, for depth comparison between the pixels
> generated from each window.

That one I question in part - if the rectangles are (as is typically the
case) large then the Z buffer just ups the memory accesses. I guess for
round windows it might be handy.



Re: State of Linux graphics

2005-09-01 Thread Antonio Vargas
On 9/1/05, Alan Cox [EMAIL PROTECTED] wrote:
> On Iau, 2005-09-01 at 08:00 +0200, Antonio Vargas wrote:
> > 2. whole screen z-buffer, for depth comparison between the pixels
> > generated from each window.
>
> That one I question in part - if the rectangles are (as is typically the
> case) large then the Z buffer just ups the memory accesses. I guess for
> round windows it might be handy.

There are multiple ways to enhance z-buffer speed:

1. Use a hierarchical z-buffer

Divide the screen into 16x16-pixel tiles and keep a per-tile minimum z
value. When rendering a poly, you first check the tile z against the
poly z, and if the test fails you can skip 256 pixels in one go.
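
As a sketch (names are illustrative; under a smaller-z-is-nearer convention the value to keep per tile is the farthest depth already written there, i.e. the tile maximum, with the roles of min and max swapping under the opposite convention):

```c
#include <stdbool.h>

/* Hierarchical-z tile test: a z-constant poly that lies behind the
 * farthest depth already written in a 16x16 tile loses all 256
 * per-pixel depth tests at once, so one comparison skips the tile. */
enum { TILE_W = 16, TILE_H = 16 };  /* 256 pixels per tile */

static bool tile_rejects_poly(int tile_far_z, int poly_z)
{
    return poly_z >= tile_far_z;  /* poly is behind everything in the tile */
}
```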

2. Use scanline-major rendering:

for_each_scanline{
  clear_z_for_scanline();
  for_each_polygon{
    draw_pixels_for_current_polygon_and_scanline();
  }
}

This is easily done by modeling the scanline renderer with a coroutine
for each polygon to be painted. The z-buffer is reduced to a single
scanline and reused for all of them, so it's rather fast :)
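
Stripped of the coroutine machinery, the per-scanline z test can be sketched as follows; a "span" stands in for the pixels one polygon covers on the current scanline, and the 8-pixel width and all names are illustrative:

```c
#include <limits.h>

enum { WIDTH = 8 };

struct span { int x0, x1; int z; unsigned color; };  /* smaller z = nearer */

static void render_scanline(const struct span *spans, int nspans,
                            unsigned line[WIDTH])
{
    int zline[WIDTH];                   /* z-buffer for just this line */
    for (int x = 0; x < WIDTH; x++) {   /* clear_z_for_scanline() */
        zline[x] = INT_MAX;
        line[x] = 0;
    }
    for (int i = 0; i < nspans; i++)    /* for_each_polygon */
        for (int x = spans[i].x0; x <= spans[i].x1 && x < WIDTH; x++)
            if (spans[i].z < zline[x]) {
                zline[x] = spans[i].z;  /* nearer pixel wins */
                line[x] = spans[i].color;
            }
}
```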

-- 
Greetz, Antonio Vargas aka winden of network

http://wind.codepixel.com/

Las cosas no son lo que parecen, excepto cuando parecen lo que si son.


Re: State of Linux graphics

2005-09-01 Thread Brian Paul

Just a few comments...

Keith Packard wrote:


> Again, the question is whether a mostly-software OpenGL implementation
> can effectively compete against the simple X+Render graphics model for
> basic 2D application operations, and whether there are people interested
> in even trying to make this happen.


I don't know of anyone who's written a 2D-centric Mesa driver, but
it's feasible.  The basic idea would be to simply fast-path a handful
of basic OpenGL paths that correspond to the basic X operations:


1. Solid rect fill: glScissor + glClear
2. Blit/copy: glCopyPixels
3. Monochrome glyphs: glBitmap
4. PutImage: glDrawPixels

Those OpenGL commands could be directly implemented with whatever 
mechanism is used in conventional X drivers.  I don't think the 
overhead of going through the OpenGL/Mesa API would be significant.


If Xgl used those commands and you didn't turn on fancy blending, etc.,
the performance should be fine.  If the hardware supported blending,
that could be easily exposed too.  The rest of OpenGL would go through
the usual software paths (slow, but better than nothing).


It might be an interesting project for someone.  After one driver was 
done subsequent ones should be fairly easy.




> > |... However, at the
> > | application level, GL is not a very friendly 2D application-level API.
> >
> > The point of OpenGL is to expose what the vast majority of current
> > display hardware does well, and not a lot more.  So if a class of apps
> > isn't happy with the functionality that OpenGL provides, it won't be
> > happy with the functionality that any other low-level API provides.  The
> > problem lies with the hardware.
>
> Not currently; the OpenGL we have today doesn't provide for
> component-level compositing or off-screen drawable objects. The former
> is possible in much modern hardware, and may be exposed in GL through
> pixel shaders, while the latter spent far too long mired in the ARB and
> is only now on the radar for implementation in our environment.
>
> Off-screen drawing is the dominant application paradigm in the 2D world,
> so we can't function without it while component-level compositing
> provides superior text presentation on LCD screens, which is an
> obviously increasing segment of the market.


Yeah, we really need to make some progress with off-screen rendering 
in our drivers (either Pbuffers or renderbuffers).  I've been working 
on renderbuffers but we really need that overdue memory manager.




> > Jon's right about this:  If you can accelerate a given simple function
> > (blending, say) for a 2D driver, you can accelerate that same function
> > in a Mesa driver for a comparable amount of effort, and deliver a
> > similar benefit to apps.  (More apps, in fact, since it helps
> > OpenGL-based apps as well as Cairo-based apps.)
>
> Yes, you *can*, but the amount of code needed to perform simple
> pixel-aligned upright blends is a tiny fraction of that needed to deal
> with filtering textures and *then* blending. All of the compositing code
> needed for the Render extension, including accelerated (MMX) paths, is
> implemented in 10K LOC. Optimizing a new case generally involves writing
> about 50 lines of code or so.


If the blending is for screen-aligned rects, glDrawPixels would be a 
far easier path to optimize than texturing.  The number of state 
combinations related to texturing is pretty overwhelming.



Anyway, I think we're all in agreement about the desirability of 
having a single, unified driver in the future.


-Brian


Re: State of Linux graphics

2005-09-01 Thread Alan Cox
On Iau, 2005-09-01 at 09:24 -0600, Brian Paul wrote:
> If the blending is for screen-aligned rects, glDrawPixels would be a
> far easier path to optimize than texturing.  The number of state
> combinations related to texturing is pretty overwhelming.

As doom showed, however, once you can cut down some of the combinations,
particularly if you know the texture orientation is limited, you can
really speed it up.

Blending is going to end up using textures onto flat surfaces facing the
viewer which are not rotated or skewed.



Re: State of Linux graphics

2005-09-01 Thread Jim Gettys
On Thu, 2005-09-01 at 09:24 -0600, Brian Paul wrote:

 
> If the blending is for screen-aligned rects, glDrawPixels would be a
> far easier path to optimize than texturing.  The number of state
> combinations related to texturing is pretty overwhelming.
>
> Anyway, I think we're all in agreement about the desirability of
> having a single, unified driver in the future.

Certainly for most hardware in the developed world I think we all agree
with this. The argument is about when we get to one driver model, not if
we get there, and not what the end state is.

In my view, the battle is on legacy systems and the very low end, not in
hardware we hackers use that might run Windows Vista or Mac OS X.

I've had the (friendly) argument with Allen Akin for 15 years that due
to reduction of hardware costs we can't presume OpenGL.  Someday, he'll
be right, and I'll be wrong.  I'm betting I'll be right for a few more
years, and nothing would tickle me pink more than to lose the argument
soon...

Legacy hardware and that being proposed/built for the developing world
is tougher; we have code in hand for existing chips, and the price point
is even well below cell phones on those devices. They don't have
anything beyond basic blit and, miracles of miracles, alpha blending.
These are built on one or two generation back fabs, again for cost.
And as there are no carriers subsidizing the hardware cost, the real
hardware cost has to be met, at very low price points.  They don't come
with the features Allen admires in the latest cell phone chips.

I think the onus of proof that we can immediately completely ditch a
second driver framework in favor of everything being OpenGL, even a
software tuned one, is in my view on those who claim that is viable.
Waving one's hands and claiming there are 100 kbyte closed source
OpenGL/ES implementations doesn't cut it in my view, given where we are
today with the code we already have in hand.  So far, the case hasn't
been made.

Existence proof that we're wrong and can move *entirely* to OpenGL
sooner rather than later would be gratefully accepted..
Regards,
Jim




Re: State of Linux graphics

2005-09-01 Thread Brian Paul

Alan Cox wrote:

> On Iau, 2005-09-01 at 09:24 -0600, Brian Paul wrote:
> > If the blending is for screen-aligned rects, glDrawPixels would be a
> > far easier path to optimize than texturing.  The number of state
> > combinations related to texturing is pretty overwhelming.
>
> As doom showed however once you can cut down some of the combinations
> particularly if you know the texture orientation is limited you can
> really speed it up.
>
> Blending is going to end up using textures onto flat surfaces facing the
> viewer which are not rotated or skewed.


Hi Alan,

It's other (non-orientation) texture state I had in mind:

- the texel format (OpenGL has over 30 possible texture formats).
- texture size and borders
- the filtering mode (linear, nearest, etc)
- coordinate wrap mode (clamp, repeat, etc)
- env/combine mode
- multi-texture state

It basically means that the driver may have to do state checks similar 
to this to determine if it can use optimized code.  An excerpt from Mesa:


   if (ctx->Texture._EnabledCoordUnits == 0x1
       && !ctx->FragmentProgram._Active
       && ctx->Texture.Unit[0]._ReallyEnabled == TEXTURE_2D_BIT
       && texObj2D->WrapS == GL_REPEAT
       && texObj2D->WrapT == GL_REPEAT
       && texObj2D->_IsPowerOfTwo
       && texImg->Border == 0
       && texImg->Width == texImg->RowStride
       && (format == MESA_FORMAT_RGB || format == MESA_FORMAT_RGBA)
       && minFilter == magFilter
       && ctx->Light.Model.ColorControl == GL_SINGLE_COLOR
       && ctx->Texture.Unit[0].EnvMode != GL_COMBINE_EXT) {
      if (ctx->Hint.PerspectiveCorrection == GL_FASTEST) {
         if (minFilter == GL_NEAREST
             && format == MESA_FORMAT_RGB
             && (envMode == GL_REPLACE || envMode == GL_DECAL)
             && ((swrast->_RasterMask == (DEPTH_BIT | TEXTURE_BIT)
                  && ctx->Depth.Func == GL_LESS
                  && ctx->Depth.Mask == GL_TRUE)
                 || swrast->_RasterMask == TEXTURE_BIT)
             && ctx->Polygon.StippleFlag == GL_FALSE
             && ctx->Visual.depthBits >= 16) {
            if (swrast->_RasterMask == (DEPTH_BIT | TEXTURE_BIT)) {
               USE(simple_z_textured_triangle);
            }
            else {
               USE(simple_textured_triangle);
            }
         }
   [...]

That's pretty ugly.  Plus the rasterization code for textured 
triangles is fairly complicated.


But the other significant problem is the application has to be sure it 
has set all the GL state correctly so that the fast path is really 
used.  If it gets one thing wrong, you may be screwed.  If different 
drivers optimize slightly different paths, that's another problem.


glDrawPixels would be simpler for both the implementor and user.

-Brian


Re: State of Linux graphics

2005-09-01 Thread Andreas Hauser
jg wrote @ Thu, 01 Sep 2005 11:59:33 -0400:

> Legacy hardware and that being proposed/built for the developing world
> is tougher; we have code in hand for existing chips, and the price point
> is even well below cell phones on those devices. They don't have
> anything beyond basic blit and, miracles of miracles, alpha blending.
> These are built on one or two generation back fabs, again for cost.
> And as there are no carriers subsidizing the hardware cost, the real
> hardware cost has to be met, at very low price points.  They don't come
> with the features Allen admires in the latest cell phone chips.

So you suggest that we, who have capable cards (which can be had for
< 50 Euro here), should find something better than X.org to run
on them, because X.org is concentrating on < 10 Euro chips?
Somehow I always thought that older xfree86 trees were just fine for them.

Andy


Re: State of Linux graphics

2005-09-01 Thread Ian Romanick
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Brian Paul wrote:

> It's other (non-orientation) texture state I had in mind:
>
> - the texel format (OpenGL has over 30 possible texture formats).
> - texture size and borders
> - the filtering mode (linear, nearest, etc)
> - coordinate wrap mode (clamp, repeat, etc)
> - env/combine mode
> - multi-texture state

Which is why it's such a good target for code generation.  You'd
generate the texel fetch routine, use that to generate the wrapped texel
fetch routine, use that to generate the filtered texel fetch routine,
use that to generate the env/combine routines.

Once-upon-a-time I had the first part and some of the second part
written.  Doing just that little bit was slightly faster on a Pentium 3
and slightly slower on a Pentium 4.  I suspect the problem was that I
wasn't caching the generated code smartly enough, so it was thrashing
the CPU cache.  The other problem is that, in the absence of an
assembler in Mesa, it was really painful to change the code stubs.
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.2.6 (GNU/Linux)

iD8DBQFDFziUX1gOwKyEAw8RAhmFAJ9QJ7RTrB2dHV/hwb8ktwLyqKSM4wCdGtbS
b0A2N2jFcLeg8HRm53jMyrI=
=Ygkd
-END PGP SIGNATURE-
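
The layering Ian describes can be pictured without any real code generation, using plain C function composition as a stand-in; a runtime specializer would instead emit one fused routine per state vector, avoiding the indirect call per stage. Everything below is illustrative, not Mesa code:

```c
#include <stdint.h>

/* Stage 1: raw texel fetch */
static uint32_t fetch_raw(const uint32_t *tex, int w, int h, int s, int t)
{
    (void)h;
    return tex[t * w + s];
}

/* Stage 2: GL_REPEAT-style wrap built on the raw fetch,
 * assuming power-of-two texture sizes */
static uint32_t fetch_repeat(const uint32_t *tex, int w, int h, int s, int t)
{
    return fetch_raw(tex, w, h, s & (w - 1), t & (h - 1));
}

/* Stage 3: GL_REPLACE-style env mode built on whatever fetch
 * stage was selected; it passes the wrapped fetch straight through */
typedef uint32_t (*fetch_fn)(const uint32_t *tex, int w, int h, int s, int t);

static uint32_t sample_replace(fetch_fn fetch, const uint32_t *tex,
                               int w, int h, int s, int t)
{
    return fetch(tex, w, h, s, t);
}
```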


Re: State of Linux graphics

2005-09-01 Thread Keith Whitwell

Ian Romanick wrote:

> Brian Paul wrote:
> > It's other (non-orientation) texture state I had in mind:
> >
> > - the texel format (OpenGL has over 30 possible texture formats).
> > - texture size and borders
> > - the filtering mode (linear, nearest, etc)
> > - coordinate wrap mode (clamp, repeat, etc)
> > - env/combine mode
> > - multi-texture state
>
> Which is why it's such a good target for code generation.  You'd
> generate the texel fetch routine, use that to generate the wrapped texel
> fetch routine, use that to generate the filtered texel fetch routine,
> use that to generate the env/combine routines.
>
> Once-upon-a-time I had the first part and some of the second part
> written.  Doing just that little bit was slightly faster on a Pentium 3
> and slightly slower on a Pentium 4.  I suspect the problem was that I
> wasn't caching the generated code smartly enough, so it was thrashing
> the CPU cache.  The other problem is that, in the absence of an
> assembler in Mesa, it was really painful to change the code stubs.


Note that the last part is now partially addressed at least - Mesa has 
an integrated and simple runtime assembler for x86 and sse.  There are 
some missing pieces and rough edges, but it's working and useful as it 
stands.


Keith


Re: State of Linux graphics

2005-09-01 Thread Jim Gettys
Not at all.

We're pursuing two courses of action right now, that are not mutually
exclusive.

Jon Smirl's argument is that we can satisfy both needs simultaneously
with a GL-only strategy, and that doing two is counterproductive,
primarily on available resource grounds.

My point is that I don't think the case has (yet) been made to put all
eggs into that one basket, and that some of the arguments presented for
that course of action don't hold together.

- Jim

On Thu, 2005-09-01 at 16:39 +, Andreas Hauser wrote:
> jg wrote @ Thu, 01 Sep 2005 11:59:33 -0400:
>
> > Legacy hardware and that being proposed/built for the developing world
> > is tougher; we have code in hand for existing chips, and the price point
> > is even well below cell phones on those devices. They don't have
> > anything beyond basic blit and, miracles of miracles, alpha blending.
> > These are built on one or two generation back fabs, again for cost.
> > And as there are no carriers subsidizing the hardware cost, the real
> > hardware cost has to be met, at very low price points.  They don't come
> > with the features Allen admires in the latest cell phone chips.
>
> So you suggest, that we, that have capable cards, which can be had for
> < 50 Euro here, see that we find something better than X.org to run
> on them because X.org is concentrating on < 10 Euro chips?
> Somehow i always thought that older xfree86 trees were just fine for them.
>
> Andy



Re: State of Linux graphics

2005-09-01 Thread Allen Akin
On Wed, Aug 31, 2005 at 08:59:23PM -0700, Keith Packard wrote:
| 
| Yeah, two systems, but (I hope) only one used for each card. So far, I'm
| not sure of the value of attempting to provide a mostly-software GL
| implementation in place of existing X drivers.

For the short term it's valuable for the apps that use OpenGL directly.
Games, of course, on platforms from cell-phone/PDA complexity up; also
things like avatar-based user interfaces.  On desktop platforms, plenty
of non-game OpenGL-based apps exist in the Windows world and I'd expect
those will migrate to Linux as the Linux desktop market grows enough to
be commercially viable.  R128-class hardware is fast enough to be useful
for many non-game apps.

For the long term, you have to decide how likely it is that demands for
new functionality on old platforms will arise.  Let's assume for the
moment that they do.  If OpenGL is available, we have the option to use
it.  If OpenGL isn't available, we have to go through another iteration
of the process we're in now, and grow Render (or some new extensions)
with consequent duplication of effort and/or competition for resources.

| I continue to work on devices for which 3D isn't going to happen.  My
| most recent window system runs on a machine with only 384K of memory...

I'm envious -- sounds like a great project.  But such systems aren't
representative of the vast majority of hardware for which we're building
Render and EXA implementations today.  (Nor are they representative of
the hardware on which most Gnome or KDE apps would run, I suspect.)  I
question how much influence they should have over our core graphics
strategy.

| Again, the question is whether a mostly-software OpenGL implementation
| can effectively compete against the simple X+Render graphics model for
| basic 2D application operations...

I think it's pretty clear that it can, since the few operations we want
to accelerate already fit within the OpenGL framework.  

(I just felt a bit of deja vu over this -- I heard eerily similar
arguments from Microsoft when the first versions of Direct3D were
created.)

|   ...and whether there are people interested
| in even trying to make this happen.

In the commercial world people believe such a thing is valuable, and
it's already happened.  (See, for example,
http://www.hybrid.fi/main/esframework/tools.php).

Why hasn't it happened in the Open Source world?  Well, I'd argue it's
largely because we chose to put our limited resources behind projects
inside the X server instead.

|  The point of OpenGL is to expose what the vast majority of current
|  display hardware does well, and not a lot more.  So if a class of apps
|  isn't happy with the functionality that OpenGL provides, it won't be
|  happy with the functionality that any other low-level API provides.  The
|  problem lies with the hardware.
| 
| Not currently; the OpenGL we have today doesn't provide for
| component-level compositing or off-screen drawable objects. The former
| is possible in much modern hardware, and may be exposed in GL through
| pixel shaders, while the latter spent far too long mired in the ARB and
| is only now on the radar for implementation in our environment.

Component-level compositing:  Current and past hardware doesn't support
it, so even if you create a new low-level API for it you won't get
acceleration.  You can, however, use a multipass algorithm (as Glitz
does) and get acceleration for it through OpenGL even on marginal old
hardware.  I'd guess that the latter is much more likely to satisfy app
developers than the former (and that's the point I was trying to make
above).

Off-screen drawable objects:  PBuffers are offscreen drawable objects
that have existed in OpenGL since 1995 (if I remember correctly).
Extensions exist to allow using them as textures, too.  We simply chose
to implement an entirely new mechanism for offscreen rendering rather
than putting our resources into implementing a spec that already
existed.

| So, my motivation for moving to GL drivers is far more about providing
| drivers for closed source hardware and reducing developer effort needed
| to support new hardware ...

I agree that these are extremely important.

|  ...than it is about making the desktop graphics
| faster or more fancy.

Some people do feel otherwise on that point. :-)

| The bulk of 2D applications need to paint solid rectangles, display a
| couple of images with a bit of scaling and draw some text.

Cairo does a lot more than that, so it would seem that we expect that
situation to change (for example, as SVG gains traction).

Aside:  [I know you know this, but I just want to call it out for any
reader who hasn't considered it before.]  You can almost never base a
design on just the most common operations; infrequent operations matter
too, if they're sufficiently expensive.  For example, in a given desktop
scene glyph drawing commands might outnumber 

Re: State of Linux graphics

2005-09-01 Thread Jon Smirl
On 9/1/05, Jim Gettys [EMAIL PROTECTED] wrote:
 Not at all.
 
 We're pursuing two courses of action right now, that are not mutually
 exclusive.
 
 Jon Smirl's argument is that we can satisfy both needs simultaneously
 with a GL only strategy, and that doing two is counter productive,
 primarily on available resource grounds.
 
 My point is that I don't think the case has (yet) been made to put all
 eggs into that one basket, and that some of the arguments presented for
 that course of action don't hold together.

We're not putting all of our eggs in one basket; you keep forgetting
that we already have a server that supports all of the currently
existing hardware. The question is where we want to put our future
eggs.

-- 
Jon Smirl
[EMAIL PROTECTED]


Re: State of Linux graphics

2005-09-01 Thread Sean
On Thu, September 1, 2005 4:38 pm, Jon Smirl said:

 We're not putting all of our eggs in one basket; you keep forgetting
 that we already have a server that supports all of the currently
 existing hardware. The question is where we want to put our future
 eggs.

Amen!   All these arguments that we can't support an advanced future
design unless the new design also supports $10 third-world video cards
are a complete red herring.

Sean





Re: State of Linux graphics

2005-09-01 Thread rep stsb
svgalib is spelled svgalib

I have started writing a windowing program which
uses svgalib. The source code is available at,

http://sourceforge.net/projects/svgalib-windows
 
select "browse CVS". SourceForge is rebuilding its
site, so some things don't work.



Re: State of Linux graphics

2005-08-31 Thread Keith Packard
On Wed, 2005-08-31 at 18:58 -0700, Allen Akin wrote:
> On Wed, Aug 31, 2005 at 02:06:54PM -0700, Keith Packard wrote:
> | On Wed, 2005-08-31 at 13:06 -0700, Allen Akin wrote:
> | > ...
> | 
> | Right, the goal is to have only one driver for the hardware, whether an
> | X server for simple 2D only environments or a GL driver for 2D/3D
> | environments. ...
> 
> I count two drivers there; I was hoping the goal was for one. :-)

Yeah, two systems, but (I hope) only one used for each card. So far, I'm
not sure of the value of attempting to provide a mostly-software GL
implementation in place of existing X drivers.

> |   ... I think the only questions here are about the road from
> | where we are to that final goal.
> 
> Well there are other questions, including whether it's correct to
> partition the world into "2D only" and "2D/3D" environments.  There are
> many disadvantages and few advantages (that I can see) for doing so.

I continue to work on devices for which 3D isn't going to happen.  My
most recent window system runs on a machine with only 384K of memory,
and yet supports a reasonable facsimile of a linux desktop environment.

In the 'real world', we have linux machines continuing to move
"down-market" with a target price of $100. At this price point, it is
reasonable to look at what are now considered 'embedded' graphics
controllers with no acceleration other than simple copies and fills.

Again, the question is whether a mostly-software OpenGL implementation
can effectively compete against the simple X+Render graphics model for
basic 2D application operations, and whether there are people interested
in even trying to make this happen.

> |... However, at the
> | application level, GL is not a very friendly 2D application-level API.
> 
> The point of OpenGL is to expose what the vast majority of current
> display hardware does well, and not a lot more.  So if a class of apps
> isn't "happy" with the functionality that OpenGL provides, it won't be
> happy with the functionality that any other low-level API provides.  The
> problem lies with the hardware.

Not currently; the OpenGL we have today doesn't provide for
component-level compositing or off-screen drawable objects. The former
is possible in much modern hardware, and may be exposed in GL through
pixel shaders, while the latter spent far too long mired in the ARB and
is only now on the radar for implementation in our environment.

Off-screen drawing is the dominant application paradigm in the 2D world,
so we can't function without it, while component-level compositing
provides superior text presentation on LCD screens, an obviously
increasing segment of the market.

> Conversely, if the apps aren't taking advantage of the functionality
> OpenGL provides, they're not exploiting the opportunities the hardware
> offers.  Of course I'm not saying all apps *must* use all of OpenGL;
> simply that their developers should be aware of exactly what they're
> leaving on the table.  It can make the difference between an app that's
> run-of-the-mill and one that's outstanding.

Most 2D applications aren't all about the presentation on the screen;
right now, we're struggling to just get basic office functionality
provided to the user. The cairo effort is more about making applications
portable to different window systems and printing systems than it is
about bling, although the bling does have a strong pull for some
developers.

So, my motivation for moving to GL drivers is far more about providing
drivers for closed source hardware and reducing developer effort needed
to support new hardware than it is about making the desktop graphics
faster or more fancy.

> "Friendliness" is another matter, and it makes a ton of sense to package
> common functionality in an easier-to-use higher-level library that a lot
> of apps can share.  In this discussion my concern isn't with Cairo, but
> with the number and type of back-end APIs we (driver developers and
> library developers and application developers) have to support.

Right, again the goal is to have only one driver per video card. Right
now we're not there, and the result is that the GL drivers take a back
seat in most environments to the icky X drivers that are required to
provide simple 2D graphics. That's not a happy place to be, and we do
want to solve that as soon as possible.

> | ... GL provides
> | far more functionality than we need for 2D applications being designed
> | and implemented today...
> 
> With the exception of lighting, it seems to me that pretty much all of
> that applies to today's "2D" apps.  It's just a myth that there's "far
> more" functionality in OpenGL than 2D apps can use.  (Especially for
> OpenGL ES, which eliminates legacy cruft from full OpenGL.)

The bulk of 2D applications need to paint solid rectangles, display a
couple of images with a bit of scaling and draw some text.

Re: State of Linux graphics

2005-08-31 Thread Ian Romanick

Allen Akin wrote:
> On Wed, Aug 31, 2005 at 02:06:54PM -0700, Keith Packard wrote:
> | 
> | ...So far, 3D driver work has proceeded almost entirely on the
> | newest documented hardware that people could get. Going back and
> | spending months optimizing software 3D rendering code so that it works
> | as fast as software 2D code seems like a thankless task.
> 
> Jon's right about this:  If you can accelerate a given simple function
> (blending, say) for a 2D driver, you can accelerate that same function
> in a Mesa driver for a comparable amount of effort, and deliver a
> similar benefit to apps.  (More apps, in fact, since it helps
> OpenGL-based apps as well as Cairo-based apps.)

The difference is that there is a much larger number of state
combinations possible in OpenGL than in something stripped down for
"just 2D".  That can make it more difficult to know where to spend the
time tuning.  I've spent a fair amount of time looking at Mesa's texture
blending code, so I know this to be true.

The real route forward is to dig deeper into run-time code generation.
There are a large number of possible combinations, but they all look
pretty similar.  This is ideal for run-time code gen.  The problem is
that writing correct, tuned assembly for this stuff takes a pretty
experienced developer, and writing correct, tuned code generation
routines takes an even more experienced developer.  Experienced and more
experienced developers are, alas, in short supply.

BTW, Allen, when are you going to start writing code again? >:)

> So long as people are encouraged by word and deed to spend their time on
> "2D" drivers, Mesa drivers will be further starved for resources and the
> belief that OpenGL has nothing to offer "2D" apps will become
> self-fulfilling.


Re: State of Linux graphics

2005-08-31 Thread Allen Akin
On Wed, Aug 31, 2005 at 02:06:54PM -0700, Keith Packard wrote:
| On Wed, 2005-08-31 at 13:06 -0700, Allen Akin wrote:
| > ...
| 
| Right, the goal is to have only one driver for the hardware, whether an
| X server for simple 2D only environments or a GL driver for 2D/3D
| environments. ...

I count two drivers there; I was hoping the goal was for one. :-)

|   ... I think the only questions here are about the road from
| where we are to that final goal.

Well there are other questions, including whether it's correct to
partition the world into "2D only" and "2D/3D" environments.  There are
many disadvantages and few advantages (that I can see) for doing so.

[I'm going to reorder your comments just a bit to clarify my replies; I
hope that's OK]

|... However, at the
| application level, GL is not a very friendly 2D application-level API.

The point of OpenGL is to expose what the vast majority of current
display hardware does well, and not a lot more.  So if a class of apps
isn't "happy" with the functionality that OpenGL provides, it won't be
happy with the functionality that any other low-level API provides.  The
problem lies with the hardware.

Conversely, if the apps aren't taking advantage of the functionality
OpenGL provides, they're not exploiting the opportunities the hardware
offers.  Of course I'm not saying all apps *must* use all of OpenGL;
simply that their developers should be aware of exactly what they're
leaving on the table.  It can make the difference between an app that's
run-of-the-mill and one that's outstanding.

"Friendliness" is another matter, and it makes a ton of sense to package
common functionality in an easier-to-use higher-level library that a lot
of apps can share.  In this discussion my concern isn't with Cairo, but
with the number and type of back-end APIs we (driver developers and
library developers and application developers) have to support.

| ... GL provides
| far more functionality than we need for 2D applications being designed
| and implemented today...

When I look at OpenGL, I see ways to:

Create geometric primitives
Specify how those primitives are transformed
Apply lighting to objects made of those primitives
Convert geometric primitives to images
Create images
Specify how those images are transformed
Determine which portions of images should be visible
Combine images
Manage the physical resources for implementing this stuff

With the exception of lighting, it seems to me that pretty much all of
that applies to today's "2D" apps.  It's just a myth that there's "far
more" functionality in OpenGL than 2D apps can use.  (Especially for
OpenGL ES, which eliminates legacy cruft from full OpenGL.)

|... picking the right subset and sticking to that is
| our current challenge.

That would be fine with me.  I'm more worried about what Render (plus
EXA?) represents -- a second development path with the actual costs and
opportunity costs I've mentioned before, and if apps become wedded to it
(rather than to a higher level like Cairo), a loss of opportunity to
exploit new features and better performance at the application level.

|  ...The integration of 2D and 3D acceleration into a
| single GL-based system will take longer, largely as we wait for the GL
| drivers to catch up to the requirements of the Xgl implementation that
| we already have.

Like Jon, I'm concerned that the focus on Render and EXA will
simultaneously take resources away from and reduce the motivation for
those drivers.

| I'm not sure we have any significant new extensions to create here;
| we've got a pretty good handle on how X maps to GL and it seems to work
| well enough with suitable existing extensions.

I'm glad to hear it, though a bit surprised.

| This will be an interesting area of research; right now, 2D applications
| are fairly sketchy about the structure of their UIs, so attempting to
| wrap them into more structured models will take some effort.

Game developers have done a surprising amount of work in this area, and
I know of one company deploying this sort of UI on graphics-accelerated
cell phones.  So some practical experience exists, and we should find a
way to tap into it.

| Certainly ensuring that cairo on glitz can be used to paint into an
| arbitrary GL context will go some ways in this direction.

Yep, that's essential.

| ...So far, 3D driver work has proceeded almost entirely on the
| newest documented hardware that people could get. Going back and
| spending months optimizing software 3D rendering code so that it works
| as fast as software 2D code seems like a thankless task.

Jon's right about this:  If you can accelerate a given simple function
(blending, say) for a 2D driver, you can accelerate that same function
in a Mesa driver for a comparable amount of effort, and deliver a
similar benefit to apps.  (More apps, in fact, since it helps
OpenGL-based apps as well as Cairo-based apps.)

Re: State of Linux graphics

2005-08-31 Thread James Cloos
> "Ian" == Ian Romanick <[EMAIL PROTECTED]> writes:

Ian> I'd really like to see a list of areas where OpenGL
Ian> isn't up to snuff for 2D operations. 

Is that OpenVG spec from Khronos a reasonable baseline
for such a list?

-JimC
-- 
James H. Cloos, Jr. <[EMAIL PROTECTED]>


Re: State of Linux graphics

2005-08-31 Thread Keith Packard
On Wed, 2005-08-31 at 13:06 -0700, Allen Akin wrote:
> On Wed, Aug 31, 2005 at 11:29:30AM -0700, Keith Packard wrote:
> | The real goal is to provide a good programming environment for 2D
> | applications, not to push some particular low-level graphics library.
> 
> I think that's a reasonable goal.
> 
> My red flag goes up at the point where the 2D programming environment
> pushes down into device drivers and becomes an OpenGL peer.  That's
> where we risk redundancy of concepts, duplication of engineering effort,
> and potential semantic conflicts.

Right, the goal is to have only one driver for the hardware, whether an
X server for simple 2D only environments or a GL driver for 2D/3D
environments. I think the only questions here are about the road from
where we are to that final goal.

> For just one small example, we now have several ways of specifying the
> format of a pixel and creating source and destination surfaces based on
> a format.  Some of those formats and surfaces can't be used directly by
> Render, and some can't be used directly by OpenGL.  Furthermore, the
> physical resources have to be managed by some chunk of software that
> must now resolve conflicts between two APIs.

As long as Render is capable of exposing enough information about the GL
formats for 2D applications to operate, I think we're fine. GL provides
far more functionality than we need for 2D applications being designed
and implemented today; picking the right subset and sticking to that is
our current challenge.

> The ARB took a heck of a long time getting consensus on the framebuffer
> object extension in OpenGL because image resource management is a
> difficult problem at the hardware level.  By adding a second low-level
> software interface we've made it even harder.  We've also put artificial
> barriers between "2D" clients and useful "3D" functionality, and between
> "3D" clients and useful "2D" functionality.  I don't see that the nature
> of computer graphics really justifies such a separation (and in fact the
> OpenGL designers argued against it almost 15 years ago).

At the hardware level, there is no difference. However, at the
application level, GL is not a very friendly 2D application-level API.
Abstracting 3D hardware functionality to make it palatable to 2D
developers remains the key goal of Render and cairo.

Note that by layering cairo directly on GL rather than the trip through
Render and the X server, one idea was to let application developers use
the cairo API to "paint" on 3D surfaces without creating an intermediate
texture. Passing through the X server and Render will continue to draw
application content to pixels before it is applied to the final screen
geometry.

> So I think better integration is also a reasonable goal.

Current efforts in solving the memory management issues with the DRM
environment should make the actual consumer of that memory irrelevant,
so we can (at least as a temporary measure) run GL and old-style X
applications on the same card and expect them to share memory in a more
integrated fashion. The integration of 2D and 3D acceleration into a
single GL-based system will take longer, largely as we wait for the GL
drivers to catch up to the requirements of the Xgl implementation that
we already have.

> I believe we're doing well with layered implementation strategies like
> Xgl and Glitz.

I've been pleased that our early assertions about Render being
compatible with GL drawing semantics have been borne out in practice,
and that our long term goal of a usable GL-based X server are possible
if not quite ready for prime-time.

>   Where we might do better is in (1) extending OpenGL to
> provide missing functionality, rather than creating peer low-level APIs;

I'm not sure we have any significant new extensions to create here;
we've got a pretty good handle on how X maps to GL and it seems to work
well enough with suitable existing extensions.

> (2) expressing the output of higher-level services in terms of OpenGL
> entities (vertex buffer objects, framebuffer objects including textures,
> shader programs, etc.) so that apps can mix-and-match them and
> scene-graph libraries can optimize their use; 

This will be an interesting area of research; right now, 2D applications
are fairly sketchy about the structure of their UIs, so attempting to
wrap them into more structured models will take some effort.

Certainly ensuring that cairo on glitz can be used to paint into an
arbitrary GL context will go some ways in this direction.

> (3) finishing decent
> OpenGL drivers for small and old hardware to address people's concerns
> about running modern apps on those systems.

The question is whether this is interesting enough to attract developer
resources. So far, 3D driver work has proceeded almost entirely on the
newest documented hardware that people could get. Going back and
spending months optimizing software 3D rendering code so that it works
as fast as software 2D code seems like a thankless task.

Re: State of Linux graphics

2005-08-31 Thread Ian Romanick

Allen Akin wrote:

> I believe we're doing well with layered implementation strategies like
> Xgl and Glitz.  Where we might do better is in (1) extending OpenGL to
> provide missing functionality, rather than creating peer low-level APIs;
> (2) expressing the output of higher-level services in terms of OpenGL
> entities (vertex buffer objects, framebuffer objects including textures,
> shader programs, etc.) so that apps can mix-and-match them and
> scene-graph libraries can optimize their use; (3) finishing decent
> OpenGL drivers for small and old hardware to address people's concerns
> about running modern apps on those systems.

I think that you and I are in total agreement.  I think #1 is the first
big barrier.  The problem is that I haven't seen a concrete list of the
deficiencies in OpenGL.  Before we can even consider how to extend the
API, we need to know where it needs to be extended.

I'd really like to see a list of areas where OpenGL isn't up to snuff
for 2D operations.  Ideally, items on this list would be put in one (or
more) of four categories:  missing (support for the required operation
is flat out missing from OpenGL), cumbersome (OpenGL can do it, but it
requires API acrobatics), ill defined (OpenGL can do it, but the spec
gives implementation enough leeway to make it useless for us), or slow
(OpenGL can do it, but the API overhead kills performance).

Having such a list would give us direction for both #1 and #3 in Allen's
list.


Re: State of Linux graphics

2005-08-31 Thread Allen Akin
On Wed, Aug 31, 2005 at 11:29:30AM -0700, Keith Packard wrote:
| The real goal is to provide a good programming environment for 2D
| applications, not to push some particular low-level graphics library.

I think that's a reasonable goal.

My red flag goes up at the point where the 2D programming environment
pushes down into device drivers and becomes an OpenGL peer.  That's
where we risk redundancy of concepts, duplication of engineering effort,
and potential semantic conflicts.

For just one small example, we now have several ways of specifying the
format of a pixel and creating source and destination surfaces based on
a format.  Some of those formats and surfaces can't be used directly by
Render, and some can't be used directly by OpenGL.  Furthermore, the
physical resources have to be managed by some chunk of software that
must now resolve conflicts between two APIs.

The ARB took a heck of a long time getting consensus on the framebuffer
object extension in OpenGL because image resource management is a
difficult problem at the hardware level.  By adding a second low-level
software interface we've made it even harder.  We've also put artificial
barriers between "2D" clients and useful "3D" functionality, and between
"3D" clients and useful "2D" functionality.  I don't see that the nature
of computer graphics really justifies such a separation (and in fact the
OpenGL designers argued against it almost 15 years ago).

So I think better integration is also a reasonable goal.

I believe we're doing well with layered implementation strategies like
Xgl and Glitz.  Where we might do better is in (1) extending OpenGL to
provide missing functionality, rather than creating peer low-level APIs;
(2) expressing the output of higher-level services in terms of OpenGL
entities (vertex buffer objects, framebuffer objects including textures,
shader programs, etc.) so that apps can mix-and-match them and
scene-graph libraries can optimize their use; (3) finishing decent
OpenGL drivers for small and old hardware to address people's concerns
about running modern apps on those systems.

Allen


Re: State of Linux graphics

2005-08-31 Thread Jim Gettys
Well, I'm sure you'll keep us honest... ;-).
- Jim


On Wed, 2005-08-31 at 12:06 -0700, Allen Akin wrote:
> On Wed, Aug 31, 2005 at 01:48:11PM -0400, Jim Gettys wrote:
> | Certainly replicating OpenGL 2.0's programmability through Render makes
> | no sense at all to me (or most others, I believe/hope).  If you want to
> | make full use of the GPU, I'm happy to say you should be using OpenGL.
> 
> When expressed that way, as a question of whether you're using the GPU
> all-out, I think it's easy for everyone to agree.  But we also need to
> beware of the slippery slope where functionality gets duplicated a piece
> at a time.
> 
> This has already happened in several areas, so it remains a concern.
> For me at least. :-)
> 
> Allen



Re: State of Linux graphics

2005-08-31 Thread Allen Akin
On Wed, Aug 31, 2005 at 01:48:11PM -0400, Jim Gettys wrote:
| Certainly replicating OpenGL 2.0's programmability through Render makes
| no sense at all to me (or most others, I believe/hope).  If you want to
| make full use of the GPU, I'm happy to say you should be using OpenGL.

When expressed that way, as a question of whether you're using the GPU
all-out, I think it's easy for everyone to agree.  But we also need to
beware of the slippery slope where functionality gets duplicated a piece
at a time.

This has already happened in several areas, so it remains a concern.
For me at least. :-)

Allen


Re: State of Linux graphics

2005-08-31 Thread Keith Packard
On Tue, 2005-08-30 at 23:33 -0700, Allen Akin wrote:
> On Tue, Aug 30, 2005 at 01:26:53PM -0400, David Reveman wrote:
> | On Tue, 2005-08-30 at 12:03 -0400, Jon Smirl wrote:
> | > In general, the whole concept of programmable graphics hardware is
> | > not addressed in APIs like xlib and Cairo. This is a very important
> | > point. A major new GPU feature, programmability is simply not
> | > accessible from the current X APIs. OpenGL exposes this
> | > programmability via its shader language.
> | 
> |   ... I don't
> | see why this can't be exposed through the Render extension. ...
> 
> What has always concerned me about this approach is that when you add
> enough functionality to Render or some new X extensions to fully exploit
> previous (much less current and in-development!) generations of GPUs,
> you've essentially duplicated OpenGL 2.0. 

I don't currently see any strong application motivation to provide this
kind of functionality in a general purpose 2D API, and so it wouldn't
make a lot of sense to push this into the 2D-centric X protocols either.

When that changes, we'll want to explore how best to provide that
functionality. One possibility is to transition applications to a pure
GL drawing model, perhaps using glitz as a shim between the 2D and 3D
worlds. That isn't currently practical as our GL implementations are
missing several key features (FBOs, accelerated indirect rendering,
per-component alpha compositing), but those things are all expected to
be fixed at some point.

The real goal is to provide a good programming environment for 2D
applications, not to push some particular low-level graphics library.

-keith





Re: State of Linux graphics

2005-08-31 Thread Jon Smirl
On 8/31/05, Jim Gettys <[EMAIL PROTECTED]> wrote:
> Certainly replicating OpenGL 2.0's programmability through Render makes
> no sense at all to me (or most others, I believe/hope).  If you want to
> make full use of the GPU, I'm happy to say you should be using OpenGL.
> - Jim

This is the core point of the article. Graphics hardware is rapidly
expanding on the high end in ways that are not addressed in the
existing X APIs.

The question is, what do we want to do about it? I've made my
proposal, I'd like to hear other people's constructive views on the
subject.

-- 
Jon Smirl
[EMAIL PROTECTED]


Re: State of Linux graphics

2005-08-31 Thread Jim Gettys
Certainly replicating OpenGL 2.0's programmability through Render makes
no sense at all to me (or most others, I believe/hope).  If you want to
make full use of the GPU, I'm happy to say you should be using OpenGL.
- Jim


On Tue, 2005-08-30 at 23:33 -0700, Allen Akin wrote:
> On Tue, Aug 30, 2005 at 01:26:53PM -0400, David Reveman wrote:
> | On Tue, 2005-08-30 at 12:03 -0400, Jon Smirl wrote:
> | > In general, the whole concept of programmable graphics hardware is
> | > not addressed in APIs like xlib and Cairo. This is a very important
> | > point. A major new GPU feature, programmability is simply not
> | > accessible from the current X APIs. OpenGL exposes this
> | > programmability via its shader language.
> | 
> |   ... I don't
> | see why this can't be exposed through the Render extension. ...
> 
> What has always concerned me about this approach is that when you add
> enough functionality to Render or some new X extensions to fully exploit
> previous (much less current and in-development!) generations of GPUs,
> you've essentially duplicated OpenGL 2.0.  You need to identify the
> resources to be managed (framebuffer objects, vertex objects, textures,
> programs of several kinds, etc.); explain how they're specified and how
> they interact and how they're owned/shared; define a vocabulary of
> commands that operate upon them; think about how those commands are
> translated and executed on various pieces of hardware; examine the
> impact of things like graphics context switching on the system
> architecture; and deal with a dozen other matters that have already been
> addressed fully or partly in the OpenGL world.
> 
> I think it makes a lot of sense to leverage the work that's already been
> done:  Take OpenGL as a given, and add extensions for what's missing.
> Don't create a parallel API that in the long run must develop into
> something at least as rich as OpenGL was to start with.  That costs time
> and effort, and likely won't be supported by the hardware vendors to the
> same extent that OpenGL is (thanks to the commercial forces already at
> work).  Let OpenGL do 80% of the job, then work to provide the last 20%,
> rather than trying to do 100% from scratch.
> 
> Allen
> ___
> xorg mailing list
> [EMAIL PROTECTED]
> http://lists.freedesktop.org/mailman/listinfo/xorg



Re: State of Linux graphics

2005-08-31 Thread David Reveman
On Tue, 2005-08-30 at 23:33 -0700, Allen Akin wrote:
> On Tue, Aug 30, 2005 at 01:26:53PM -0400, David Reveman wrote:
> | On Tue, 2005-08-30 at 12:03 -0400, Jon Smirl wrote:
> | > In general, the whole concept of programmable graphics hardware is
> | > not addressed in APIs like xlib and Cairo. This is a very important
> | > point. A major new GPU feature, programmability is simply not
> | > accessible from the current X APIs. OpenGL exposes this
> | > programmability via its shader language.
> | 
> |   ... I don't
> | see why this can't be exposed through the Render extension. ...
> 
> What has always concerned me about this approach is that when you add
> enough functionality to Render or some new X extensions to fully exploit
> previous (much less current and in-development!) generations of GPUs,
> you've essentially duplicated OpenGL 2.0.  You need to identify the
> resources to be managed (framebuffer objects, vertex objects, textures,
> programs of several kinds, etc.); explain how they're specified and how
> they interact and how they're owned/shared; define a vocabulary of
> commands that operate upon them; think about how those commands are
> translated and executed on various pieces of hardware; examine the
> impact of things like graphics context switching on the system
> architecture; and deal with a dozen other matters that have already been
> addressed fully or partly in the OpenGL world.
> 
> I think it makes a lot of sense to leverage the work that's already been
> done:  Take OpenGL as a given, and add extensions for what's missing.
> Don't create a parallel API that in the long run must develop into
> something at least as rich as OpenGL was to start with.  That costs time
> and effort, and likely won't be supported by the hardware vendors to the
> same extent that OpenGL is (thanks to the commercial forces already at
> work).  Let OpenGL do 80% of the job, then work to provide the last 20%,
> rather than trying to do 100% from scratch.

Sounds sane. This is actually what I've done in Xgl to make it possible
to write some more interesting compositing managers. I implemented
GLX_MESA_render_texture so that a compositing manager can bind
redirected windows to textures and draw the screen using OpenGL.

Using OpenGL instead of X Render might very well be the way we end up
doing things. The current X Render API is sufficient even for more
complex cairo applications, and that's good, as they can then be
accelerated on servers without GL support. But you're probably right:
the next time we think of extending X Render we might want to reconsider
whether that's such a good idea. If it's unlikely that anything but an
OpenGL-based server will accelerate it, then it might be a bad idea to
add it to X Render.

-David



Re: State of Linux graphics

2005-08-31 Thread Jon Smirl
On 8/31/05, Eric Anholt <[EMAIL PROTECTED]> wrote:
> the X Render extension."  No, EXA is a different acceleration
> architecture making different basic design decisions related to memory
> management and driver API.

I did start the EXA section off with this: "EXA replaces the existing
2D XAA drivers allowing the current server model to work a while
longer."

I'll edit the article to help clarify these points, but Daniel has
disabled my login at fd.o, so I can't alter the article.

> 
> "If the old hardware is missing the hardware needed to accelerate render
> there is nothing EXA can do to help."  Better memory management allows
> for better performance with composite due to improved placement of
> pixmaps, which XAA doesn't do.  So EXA can help.
> 
> "So it ends up that the hardware EXA works on is the same hardware we
> already had existing OpenGL drivers for."  No.  See, for example, the nv
> or i128 driver ports, both completed in very short timeframes.
> 
> "The EXA driver programs the 3D hardware from the 2D XAA driver adding
> yet another conflicting user to the long line of programs all trying to
> use the video hardware at the same time."  No, EXA is not an addition to
> XAA, it's a replacement.  It's not "yet another conflicting user" on
> your machine (and I have yet to actually see this purported conflict in
> my usage of either acceleration architecture).
> 
> "There is also a danger that EXA will keep expanding to expose more of
> the chip's 3D capabilities."  If people put effort into this because
> they see value in it, without breaking other people's code, why is this
> a "danger?"
> 
> --
> Eric Anholt [EMAIL PROTECTED]
> http://people.freebsd.org/~anholt/  [EMAIL PROTECTED]
> 
> 
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v1.4.1 (FreeBSD)
> 
> iD8DBQBDFUr4HUdvYGzw6vcRAl0SAKCVOCHuVweh5CJoz8UzmkTqNxrEuwCfU/t0
> BJVf4HCTUJGn/g4JtsQO0Ds=
> =tWVr
> -END PGP SIGNATURE-
> 
> 
> 


-- 
Jon Smirl
[EMAIL PROTECTED]


Re: State of Linux graphics

2005-08-31 Thread Anshuman Gholap
I have no freaking idea of coding, but going through this "Discuss
issues related to the xorg tree" talk made me feel and say "mommy
(coder), daddy (admin), please don't fight, you're tearing
us (users) apart."

Glad it's worked out. All I hope is we (users) now get some nice
fast xorg :D.

Regards,
Anshuman.
Host server admin.


Re: State of Linux graphics

2005-08-31 Thread Allen Akin
On Tue, Aug 30, 2005 at 01:26:53PM -0400, David Reveman wrote:
| On Tue, 2005-08-30 at 12:03 -0400, Jon Smirl wrote:
| > In general, the whole concept of programmable graphics hardware is
| > not addressed in APIs like xlib and Cairo. This is a very important
| > point. A major new GPU feature, programmability is simply not
| > accessible from the current X APIs. OpenGL exposes this
| > programmability via its shader language.
| 
|   ... I don't
| see why this can't be exposed through the Render extension. ...

What has always concerned me about this approach is that when you add
enough functionality to Render or some new X extensions to fully exploit
previous (much less current and in-development!) generations of GPUs,
you've essentially duplicated OpenGL 2.0.  You need to identify the
resources to be managed (framebuffer objects, vertex objects, textures,
programs of several kinds, etc.); explain how they're specified and how
they interact and how they're owned/shared; define a vocabulary of
commands that operate upon them; think about how those commands are
translated and executed on various pieces of hardware; examine the
impact of things like graphics context switching on the system
architecture; and deal with a dozen other matters that have already been
addressed fully or partly in the OpenGL world.

I think it makes a lot of sense to leverage the work that's already been
done:  Take OpenGL as a given, and add extensions for what's missing.
Don't create a parallel API that in the long run must develop into
something at least as rich as OpenGL was to start with.  That costs time
and effort, and likely won't be supported by the hardware vendors to the
same extent that OpenGL is (thanks to the commercial forces already at
work).  Let OpenGL do 80% of the job, then work to provide the last 20%,
rather than trying to do 100% from scratch.

Allen


Re: State of Linux graphics

2005-08-31 Thread Eric Anholt
On Tue, 2005-08-30 at 12:03 -0400, Jon Smirl wrote:
> I've written an article that surveys the current State of Linux
> graphics and proposes a possible path forward. This is a long article
> containing a lot of detailed technical information as a guide to
> future developers. Skip over the detailed parts if they aren't
> relevant to your area of work.
> 
> http://www.freedesktop.org/~jonsmirl/graphics.html
> 
> Topics include the current X server, framebuffer, Xgl, graphics
> drivers, multiuser support, using the GPU, and a new server design.
> Hopefully it will help you fill in the pieces and build an overall
> picture of the graphics landscape.
> 
> The article has been reviewed but if it still contains technical
> errors please let me know. Opinions on the content are also
> appreciated.

"EXA extends the XAA driver concept to use the 3D hardware to accelerate
the X Render extension."  No, EXA is a different acceleration
architecture making different basic design decisions related to memory
management and driver API.

"If the old hardware is missing the hardware needed to accelerate render
there is nothing EXA can do to help."  Better memory management allows
for better performance with composite due to improved placement of
pixmaps, which XAA doesn't do.  So EXA can help.

"So it ends up that the hardware EXA works on is the same hardware we
already had existing OpenGL drivers for."  No.  See, for example, the nv
or i128 driver ports, both completed in very short timeframes.

"The EXA driver programs the 3D hardware from the 2D XAA driver adding
yet another conflicting user to the long line of programs all trying to
use the video hardware at the same time."  No, EXA is not an addition to
XAA, it's a replacement.  It's not "yet another conflicting user" on
your machine (and I have yet to actually see this purported conflict in
my usage of either acceleration architecture).

"There is also a danger that EXA will keep expanding to expose more of
the chip’s 3D capabilities."  If people put effort into this because
they see value in it, without breaking other people's code, why is this
a "danger?"

-- 
Eric Anholt [EMAIL PROTECTED]
http://people.freebsd.org/~anholt/  [EMAIL PROTECTED]




Re: State of Linux graphics

2005-08-31 Thread Jon Smirl
On 8/31/05, Jim Gettys <[EMAIL PROTECTED]> wrote:
> Certainly replicating OpenGL 2.0's programmability through Render makes
> no sense at all to me (or most others, I believe/hope).  If you want to
> make full use of the GPU, I'm happy to say you should be using OpenGL.
> - Jim

This is the core point of the article. Graphics hardware is rapidly
expanding on the high end in ways that are not addressed in the
existing X APIs.

The question is, what do we want to do about it? I've made my
proposal; I'd like to hear other people's constructive views on the
subject.

-- 
Jon Smirl
[EMAIL PROTECTED]


Re: State of Linux graphics

2005-08-31 Thread Keith Packard
On Tue, 2005-08-30 at 23:33 -0700, Allen Akin wrote:
> On Tue, Aug 30, 2005 at 01:26:53PM -0400, David Reveman wrote:
> | On Tue, 2005-08-30 at 12:03 -0400, Jon Smirl wrote:
> | > In general, the whole concept of programmable graphics hardware is
> | > not addressed in APIs like xlib and Cairo. This is a very important
> | > point. A major new GPU feature, programmability is simply not
> | > accessible from the current X APIs. OpenGL exposes this
> | > programmability via its shader language.
> | 
> |   ... I don't
> | see why this can't be exposed through the Render extension. ...
> 
> What has always concerned me about this approach is that when you add
> enough functionality to Render or some new X extensions to fully exploit
> previous (much less current and in-development!) generations of GPUs,
> you've essentially duplicated OpenGL 2.0. 

I don't currently see any strong application motivation to provide this
kind of functionality in a general purpose 2D API, and so it wouldn't
make a lot of sense to push this into the 2D-centric X protocols either.

When that changes, we'll want to explore how best to provide that
functionality. One possibility is to transition applications to a pure
GL drawing model, perhaps using glitz as a shim between the 2D and 3D
worlds. That isn't currently practical as our GL implementations are
missing several key features (FBOs, accelerated indirect rendering,
per-component alpha compositing), but those things are all expected to
be fixed at some point.

The real goal is to provide a good programming environment for 2D
applications, not to push some particular low-level graphics library.

-keith





Re: State of Linux graphics

2005-08-31 Thread Allen Akin
On Wed, Aug 31, 2005 at 01:48:11PM -0400, Jim Gettys wrote:
| Certainly replicating OpenGL 2.0's programmability through Render makes
| no sense at all to me (or most others, I believe/hope).  If you want to
| make full use of the GPU, I'm happy to say you should be using OpenGL.

When expressed that way, as a question of whether you're using the GPU
all-out, I think it's easy for everyone to agree.  But we also need to
beware of the slippery slope where functionality gets duplicated a piece
at a time.

This has already happened in several areas, so it remains a concern.
For me at least. :-)

Allen


Re: State of Linux graphics

2005-08-31 Thread Jim Gettys
Well, I'm sure you'll keep us honest... ;-).
- Jim


On Wed, 2005-08-31 at 12:06 -0700, Allen Akin wrote:
> On Wed, Aug 31, 2005 at 01:48:11PM -0400, Jim Gettys wrote:
> | Certainly replicating OpenGL 2.0's programmability through Render makes
> | no sense at all to me (or most others, I believe/hope).  If you want to
> | make full use of the GPU, I'm happy to say you should be using OpenGL.
> 
> When expressed that way, as a question of whether you're using the GPU
> all-out, I think it's easy for everyone to agree.  But we also need to
> beware of the slippery slope where functionality gets duplicated a piece
> at a time.
> 
> This has already happened in several areas, so it remains a concern.
> For me at least. :-)
> 
> Allen



Re: State of Linux graphics

2005-08-31 Thread Allen Akin
On Wed, Aug 31, 2005 at 11:29:30AM -0700, Keith Packard wrote:
| The real goal is to provide a good programming environment for 2D
| applications, not to push some particular low-level graphics library.

I think that's a reasonable goal.

My red flag goes up at the point where the 2D programming environment
pushes down into device drivers and becomes an OpenGL peer.  That's
where we risk redundancy of concepts, duplication of engineering effort,
and potential semantic conflicts.

For just one small example, we now have several ways of specifying the
format of a pixel and creating source and destination surfaces based on
a format.  Some of those formats and surfaces can't be used directly by
Render, and some can't be used directly by OpenGL.  Furthermore, the
physical resources have to be managed by some chunk of software that
must now resolve conflicts between two APIs.

The ARB took a heck of a long time getting consensus on the framebuffer
object extension in OpenGL because image resource management is a
difficult problem at the hardware level.  By adding a second low-level
software interface we've made it even harder.  We've also put artificial
barriers between 2D clients and useful 3D functionality, and between
3D clients and useful 2D functionality.  I don't see that the nature
of computer graphics really justifies such a separation (and in fact the
OpenGL designers argued against it almost 15 years ago).

So I think better integration is also a reasonable goal.

I believe we're doing well with layered implementation strategies like
Xgl and Glitz.  Where we might do better is in (1) extending OpenGL to
provide missing functionality, rather than creating peer low-level APIs;
(2) expressing the output of higher-level services in terms of OpenGL
entities (vertex buffer objects, framebuffer objects including textures,
shader programs, etc.) so that apps can mix-and-match them and
scene-graph libraries can optimize their use; (3) finishing decent
OpenGL drivers for small and old hardware to address people's concerns
about running modern apps on those systems.

Allen


Re: State of Linux graphics

2005-08-31 Thread Ian Romanick

Allen Akin wrote:

> I believe we're doing well with layered implementation strategies like
> Xgl and Glitz.  Where we might do better is in (1) extending OpenGL to
> provide missing functionality, rather than creating peer low-level APIs;
> (2) expressing the output of higher-level services in terms of OpenGL
> entities (vertex buffer objects, framebuffer objects including textures,
> shader programs, etc.) so that apps can mix-and-match them and
> scene-graph libraries can optimize their use; (3) finishing decent
> OpenGL drivers for small and old hardware to address people's concerns
> about running modern apps on those systems.

I think that you and I are in total agreement.  I think #1 is the first
big barrier.  The problem is that I haven't seen a concrete list of the
deficiencies in OpenGL.  Before we can even consider how to extend the
API, we need to know where it needs to be extended.

I'd really like to see a list of areas where OpenGL isn't up to snuff
for 2D operations.  Ideally, items on this list would be put in one (or
more) of four categories:  missing (support for the required operation
is flat out missing from OpenGL), cumbersome (OpenGL can do it, but it
requires API acrobatics), ill defined (OpenGL can do it, but the spec
gives implementations enough leeway to make it useless for us), or slow
(OpenGL can do it, but the API overhead kills performance).

Having such a list would give us direction for both #1 and #3 in Allen's
list.


Re: State of Linux graphics

2005-08-31 Thread Keith Packard
On Wed, 2005-08-31 at 13:06 -0700, Allen Akin wrote:
> On Wed, Aug 31, 2005 at 11:29:30AM -0700, Keith Packard wrote:
> | The real goal is to provide a good programming environment for 2D
> | applications, not to push some particular low-level graphics library.
> 
> I think that's a reasonable goal.
> 
> My red flag goes up at the point where the 2D programming environment
> pushes down into device drivers and becomes an OpenGL peer.  That's
> where we risk redundancy of concepts, duplication of engineering effort,
> and potential semantic conflicts.

Right, the goal is to have only one driver for the hardware, whether an
X server for simple 2D only environments or a GL driver for 2D/3D
environments. I think the only questions here are about the road from
where we are to that final goal.

> For just one small example, we now have several ways of specifying the
> format of a pixel and creating source and destination surfaces based on
> a format.  Some of those formats and surfaces can't be used directly by
> Render, and some can't be used directly by OpenGL.  Furthermore, the
> physical resources have to be managed by some chunk of software that
> must now resolve conflicts between two APIs.

As long as Render is capable of exposing enough information about the GL
formats for 2D applications to operate, I think we're fine. GL provides
far more functionality than we need for 2D applications being designed
and implemented today; picking the right subset and sticking to that is
our current challenge.

> The ARB took a heck of a long time getting consensus on the framebuffer
> object extension in OpenGL because image resource management is a
> difficult problem at the hardware level.  By adding a second low-level
> software interface we've made it even harder.  We've also put artificial
> barriers between 2D clients and useful 3D functionality, and between
> 3D clients and useful 2D functionality.  I don't see that the nature
> of computer graphics really justifies such a separation (and in fact the
> OpenGL designers argued against it almost 15 years ago).

At the hardware level, there is no difference. However, at the
application level, GL is not a very friendly 2D application-level API.
Abstracting 3D hardware functionality to make it palatable to 2D
developers remains the key goal of Render and cairo.

Note that by layering cairo directly on GL rather than the trip through
Render and the X server, one idea was to let application developers use
the cairo API to paint on 3D surfaces without creating an intermediate
texture. Passing through the X server and Render will continue to draw
application content to pixels before it is applied to the final screen
geometry.

> So I think better integration is also a reasonable goal.

Current efforts in solving the memory management issues with the DRM
environment should make the actual consumer of that memory irrelevant,
so we can (at least as a temporary measure) run GL and old-style X
applications on the same card and expect them to share memory in a more
integrated fashion. The integration of 2D and 3D acceleration into a
single GL-based system will take longer, largely as we wait for the GL
drivers to catch up to the requirements of the Xgl implementation that
we already have.

> I believe we're doing well with layered implementation strategies like
> Xgl and Glitz.

I've been pleased that our early assertions about Render being
compatible with GL drawing semantics have been borne out in practice,
and that our long term goal of a usable GL-based X server are possible
if not quite ready for prime-time.

>   Where we might do better is in (1) extending OpenGL to
> provide missing functionality, rather than creating peer low-level APIs;

I'm not sure we have any significant new extensions to create here;
we've got a pretty good handle on how X maps to GL and it seems to work
well enough with suitable existing extensions.

> (2) expressing the output of higher-level services in terms of OpenGL
> entities (vertex buffer objects, framebuffer objects including textures,
> shader programs, etc.) so that apps can mix-and-match them and
> scene-graph libraries can optimize their use;

This will be an interesting area of research; right now, 2D applications
are fairly sketchy about the structure of their UIs, so attempting to
wrap them into more structured models will take some effort.

Certainly ensuring that cairo on glitz can be used to paint into an
arbitrary GL context will go some ways in this direction.

> (3) finishing decent
> OpenGL drivers for small and old hardware to address people's concerns
> about running modern apps on those systems.

The question is whether this is interesting enough to attract developer
resources. So far, 3D driver work has proceeded almost entirely on the
newest documented hardware that people could get. Going back and
spending months optimizing software 3D rendering code so that it works
as fast as software 2D code seems like a thankless task.

And this, unfortunately, 

Re: State of Linux graphics

2005-08-31 Thread James Cloos
>>>>> Ian == Ian Romanick [EMAIL PROTECTED] writes:

Ian> I'd really like to see a list of areas where OpenGL
Ian> isn't up to snuff for 2D operations.

Is that OpenVG spec from Khronos a reasonable baseline
for such a list?

-JimC
-- 
James H. Cloos, Jr. [EMAIL PROTECTED]


Re: State of Linux graphics

2005-08-31 Thread Allen Akin
On Wed, Aug 31, 2005 at 02:06:54PM -0700, Keith Packard wrote:
| On Wed, 2005-08-31 at 13:06 -0700, Allen Akin wrote:
|  ...
| 
| Right, the goal is to have only one driver for the hardware, whether an
| X server for simple 2D only environments or a GL driver for 2D/3D
| environments. ...

I count two drivers there; I was hoping the goal was for one. :-)

|   ... I think the only questions here are about the road from
| where we are to that final goal.

Well there are other questions, including whether it's correct to
partition the world into 2D only and 2D/3D environments.  There are
many disadvantages and few advantages (that I can see) for doing so.

[I'm going to reorder your comments just a bit to clarify my replies; I
hope that's OK]

|... However, at the
| application level, GL is not a very friendly 2D application-level API.

The point of OpenGL is to expose what the vast majority of current
display hardware does well, and not a lot more.  So if a class of apps
isn't happy with the functionality that OpenGL provides, it won't be
happy with the functionality that any other low-level API provides.  The
problem lies with the hardware.

Conversely, if the apps aren't taking advantage of the functionality
OpenGL provides, they're not exploiting the opportunities the hardware
offers.  Of course I'm not saying all apps *must* use all of OpenGL;
simply that their developers should be aware of exactly what they're
leaving on the table.  It can make the difference between an app that's
run-of-the-mill and one that's outstanding.

Friendliness is another matter, and it makes a ton of sense to package
common functionality in an easier-to-use higher-level library that a lot
of apps can share.  In this discussion my concern isn't with Cairo, but
with the number and type of back-end APIs we (driver developers and
library developers and application developers) have to support.

| ... GL provides
| far more functionality than we need for 2D applications being designed
| and implemented today...

When I look at OpenGL, I see ways to:

Create geometric primitives
Specify how those primitives are transformed
Apply lighting to objects made of those primitives
Convert geometric primitives to images
Create images
Specify how those images are transformed
Determine which portions of images should be visible
Combine images
Manage the physical resources for implementing this stuff

With the exception of lighting, it seems to me that pretty much all of
that applies to today's 2D apps.  It's just a myth that there's far
more functionality in OpenGL than 2D apps can use.  (Especially for
OpenGL ES, which eliminates legacy cruft from full OpenGL.)

|... picking the right subset and sticking to that is
| our current challenge.

That would be fine with me.  I'm more worried about what Render (plus
EXA?) represents -- a second development path with the actual costs and
opportunity costs I've mentioned before, and if apps become wedded to it
(rather than to a higher level like Cairo), a loss of opportunity to
exploit new features and better performance at the application level.

|  ...The integration of 2D and 3D acceleration into a
| single GL-based system will take longer, largely as we wait for the GL
| drivers to catch up to the requirements of the Xgl implementation that
| we already have.

Like Jon, I'm concerned that the focus on Render and EXA will
simultaneously take resources away from and reduce the motivation for
those drivers.

| I'm not sure we have any significant new extensions to create here;
| we've got a pretty good handle on how X maps to GL and it seems to work
| well enough with suitable existing extensions.

I'm glad to hear it, though a bit surprised.

| This will be an interesting area of research; right now, 2D applications
| are fairly sketchy about the structure of their UIs, so attempting to
| wrap them into more structured models will take some effort.

Game developers have done a surprising amount of work in this area, and
I know of one company deploying this sort of UI on graphics-accelerated
cell phones.  So some practical experience exists, and we should find a
way to tap into it.

| Certainly ensuring that cairo on glitz can be used to paint into an
| arbitrary GL context will go some ways in this direction.

Yep, that's essential.

| ...So far, 3D driver work has proceeded almost entirely on the
| newest documented hardware that people could get. Going back and
| spending months optimizing software 3D rendering code so that it works
| as fast as software 2D code seems like a thankless task.

Jon's right about this:  If you can accelerate a given simple function
(blending, say) for a 2D driver, you can accelerate that same function
in a Mesa driver for a comparable amount of 

Re: State of Linux graphics

2005-08-31 Thread Ian Romanick
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Allen Akin wrote:
 On Wed, Aug 31, 2005 at 02:06:54PM -0700, Keith Packard wrote:
 | 
 | ...So far, 3D driver work has proceeded almost entirely on the
 | newest documented hardware that people could get. Going back and
 | spending months optimizing software 3D rendering code so that it works
 | as fast as software 2D code seems like a thankless task.
 
 Jon's right about this:  If you can accelerate a given simple function
 (blending, say) for a 2D driver, you can accelerate that same function
 in a Mesa driver for a comparable amount of effort, and deliver a
 similar benefit to apps.  (More apps, in fact, since it helps
 OpenGL-based apps as well as Cairo-based apps.)

The difference is that there is a much larger number of state
combinations possible in OpenGL than in something stripped down for
just 2D.  That can make it more difficult to know where to spend the
time tuning.  I've spent a fair amount of time looking at Mesa's texture
blending code, so I know this to be true.

The real route forward is to dig deeper into run-time code generation.
There are a large number of possible combinations, but they all look
pretty similar.  This is ideal for run-time code gen.  The problem is
that writing correct, tuned assembly for this stuff takes a pretty
experienced developer, and writing correct, tuned code generation
routines takes an even more experienced developer.  Experienced and more
experienced developers are, alas, in short supply.

BTW, Allen, when are you going to start writing code again? :)

> So long as people are encouraged by word and deed to spend their time on
> 2D drivers, Mesa drivers will be further starved for resources and the
> belief that OpenGL has nothing to offer 2D apps will become
> self-fulfilling.


Re: State of Linux graphics

2005-08-31 Thread Keith Packard
On Wed, 2005-08-31 at 18:58 -0700, Allen Akin wrote:
> On Wed, Aug 31, 2005 at 02:06:54PM -0700, Keith Packard wrote:
> | On Wed, 2005-08-31 at 13:06 -0700, Allen Akin wrote:
> |  ...
> | 
> | Right, the goal is to have only one driver for the hardware, whether an
> | X server for simple 2D only environments or a GL driver for 2D/3D
> | environments. ...
> 
> I count two drivers there; I was hoping the goal was for one. :-)

Yeah, two systems, but (I hope) only one used for each card. So far, I'm
not sure of the value of attempting to provide a mostly-software GL
implementation in place of existing X drivers.

> |   ... I think the only questions here are about the road from
> | where we are to that final goal.
> 
> Well there are other questions, including whether it's correct to
> partition the world into 2D only and 2D/3D environments.  There are
> many disadvantages and few advantages (that I can see) for doing so.

I continue to work on devices for which 3D isn't going to happen.  My
most recent window system runs on a machine with only 384K of memory,
and yet supports a reasonable facsimile of a linux desktop environment.

In the 'real world', we have linux machines continuing to move
down-market with a target price of $100. At this price point, it is
reasonable to look at what are now considered 'embedded' graphics
controllers with no acceleration other than simple copies and fills.

Again, the question is whether a mostly-software OpenGL implementation
can effectively compete against the simple X+Render graphics model for
basic 2D application operations, and whether there are people interested
in even trying to make this happen.

> |... However, at the
> | application level, GL is not a very friendly 2D application-level API.
> 
> The point of OpenGL is to expose what the vast majority of current
> display hardware does well, and not a lot more.  So if a class of apps
> isn't happy with the functionality that OpenGL provides, it won't be
> happy with the functionality that any other low-level API provides.  The
> problem lies with the hardware.

Not currently; the OpenGL we have today doesn't provide for
component-level compositing or off-screen drawable objects. The former
is possible in much modern hardware, and may be exposed in GL through
pixel shaders, while the latter spent far too long mired in the ARB and
is only now on the radar for implementation in our environment.

Off-screen drawing is the dominant application paradigm in the 2D world,
so we can't function without it. Component-level compositing provides
superior text presentation on LCD screens, an obviously growing segment
of the market.

> Conversely, if the apps aren't taking advantage of the functionality
> OpenGL provides, they're not exploiting the opportunities the hardware
> offers.  Of course I'm not saying all apps *must* use all of OpenGL;
> simply that their developers should be aware of exactly what they're
> leaving on the table.  It can make the difference between an app that's
> run-of-the-mill and one that's outstanding.

Most 2D applications aren't all about the presentation on the screen;
right now, we're struggling to just get basic office functionality
provided to the user. The cairo effort is more about making applications
portable to different window systems and printing systems than it is
about bling, although the bling does have a strong pull for some
developers.

So, my motivation for moving to GL drivers is far more about providing
drivers for closed source hardware and reducing developer effort needed
to support new hardware than it is about making the desktop graphics
faster or more fancy.

> Friendliness is another matter, and it makes a ton of sense to package
> common functionality in an easier-to-use higher-level library that a lot
> of apps can share.  In this discussion my concern isn't with Cairo, but
> with the number and type of back-end APIs we (driver developers and
> library developers and application developers) have to support.

Right, again the goal is to have only one driver per video card. Right
now we're not there, and the result is that the GL drivers take a back
seat in most environments to the icky X drivers that are required to
provide simple 2D graphics. That's not a happy place to be, and we do
want to solve that as soon as possible.

> | ... GL provides
> | far more functionality than we need for 2D applications being designed
> | and implemented today...
> 
> With the exception of lighting, it seems to me that pretty much all of
> that applies to today's 2D apps.  It's just a myth that there's far
> more functionality in OpenGL than 2D apps can use.  (Especially for
> OpenGL ES, which eliminates legacy cruft from full OpenGL.)

The bulk of 2D applications need to paint solid rectangles, display a
couple of images with a bit of scaling and draw some text. All of the
rest of the 3D pipeline is just 

Re: State of Linux graphics

2005-08-30 Thread Jon Smirl
Access has been restored. The URL is good again.

http://www.freedesktop.org/~jonsmirl/graphics.html

-- 
Jon Smirl
[EMAIL PROTECTED]


Re: State of Linux graphics

2005-08-30 Thread Jon Smirl
Before you shut my account off I made you this offer:

On 8/31/05, Jon Smirl <[EMAIL PROTECTED]> wrote:
> Quit being a pain and write a response to the article if you don't
> like it. Censorship is not the answer. Open debate in a public format
> is the correct response. If you want me to I'll add your reponse to
> the end of the article.

I will still include your response if you want to write one.

On 8/31/05, Daniel Stone <[EMAIL PROTECTED]> wrote:
> On Wed, 2005-08-31 at 00:50 -0400, Jon Smirl wrote:
> > On 8/30/05, Daniel Stone <[EMAIL PROTECTED]> wrote:
> > > 'As a whole, the X.org community barely has enough resources to build a
> > > single server. Splitting these resources over many paths only results in
> > > piles of half finished projects. I know developers prefer working on
> > > whatever interests them, but given the resources available to X.org,
> > > this approach will not yield a new server or even a fully-competitive
> > > desktop based on the old server in the near term. Maybe it is time for
> > > X.org to work out a roadmap for all to follow.'
> > >
> > > You lose.
> >
> > Daniel Stone, the administrator of freedesktop.org, has just taken it
> > upon himself to censor my article on the state of the X server. His
> > lame excuse is that I have stopped working on the core of Xegl. It
> > doesn't seem to matter that I contributed 1,000s of lines of code to
> > fd.o that I am continuing to do maintenance on. So much for this being
> > a free desktop.
> 
> Sigh.  As I explained in the long thread we had in private mail, I have
> done several cleanups now on inactive accounts and projects.  You are
> absolutely not the first, and will not be the last.  I have not done
> such sweeps for a while, because I have not had time.  The realisation
> that your account was doing nothing other than hosting an HTML page now
> that I have some amount of time to look at fd.o again was enough to spur
> me to start a cleanup, and indeed, I am in the process of pinging many
> other dormant contributors; many of whom have merely stopped working on
> X and may return, rather than having posted long statements of
> resignation to the list.
> 
> And, as I explained, a simple statement of intent from you that you
> intend to resume active development will be enough to justify your
> account being renewed.
> 
> (Alternately, if another administrator re-enables your account, I will
> not stop this.  I'm not the sole admin, not by far ...)
> 
> Possibly impolitic and bad timing, sure.  But my intent was not to
> censor.
> 
> > Can someone else provide a place for me to host the article?
> 
> Is the wiki insufficient?  It is currently hosting such insignificant
> articles as the 6.9/7.0 release plan, f.e. ...
> 
> 


-- 
Jon Smirl
[EMAIL PROTECTED]


Re: State of Linux graphics

2005-08-30 Thread Jon Smirl
On 8/31/05, Daniel Stone <[EMAIL PROTECTED]> wrote:
> On Wed, 2005-08-31 at 00:50 -0400, Jon Smirl wrote:
> > On 8/30/05, Daniel Stone <[EMAIL PROTECTED]> wrote:
> > > 'As a whole, the X.org community barely has enough resources to build a
> > > single server. Splitting these resources over many paths only results in
> > > piles of half finished projects. I know developers prefer working on
> > > whatever interests them, but given the resources available to X.org,
> > > this approach will not yield a new server or even a fully-competitive
> > > desktop based on the old server in the near term. Maybe it is time for
> > > X.org to work out a roadmap for all to follow.'
> > >
> > > You lose.
> >
> > Daniel Stone, the administrator of freedesktop.org, has just taken it
> > upon himself to censor my article on the state of the X server. His
> > lame excuse is that I have stopped working on the core of Xegl. It
> > doesn't seem to matter that I contributed 1,000s of lines of code to
> > fd.o that I am continuing to do maintenance on. So much for this being
> > a free desktop.
> 
> Sigh.  As I explained in the long thread we had in private mail, I have
> done several cleanups now on inactive accounts and projects.  You are
> absolutely not the first, and will not be the last.  I have not done
> such sweeps for a while, because I have not had time.  The realisation
> that your account was doing nothing other than hosting an HTML page now
> that I have some amount of time to look at fd.o again was enough to spur
> me to start a cleanup, and indeed, I am in the process of pinging many
> other dormant contributors; many of whom have merely stopped working on
> X and may return, rather than having posted long statements of
> resignation to the list.
> 
> And, as I explained, a simple statement of intent from you that you
> intend to resume active development will be enough to justify your
> account being renewed.

I told you multiple times that I am doing bug fixes and maintenance on
1,000s of lines of contributed code. Is that not a valid reason to
have an account? So only people writing new code can have accounts? I
guess you will have to disable half of all the accounts on fd.o to
enforce that policy.

You censored the article. 

From the fd.o account policy page:
http://www.freedesktop.org/wiki/AccountRequests

What the Project Leader does
  Review and approve the request for an account & access to your project.

I believe I am a member of five projects on fd.o and you are not the
Project Leader of any of them. I would like to see the request from
the Project Leaders for my account removal.


> 
> (Alternately, if another administrator re-enables your account, I will
> not stop this.  I'm not the sole admin, not by far ...)
> 
> Possibly impolitic and bad timing, sure.  But my intent was not to
> censor.
> 
> > Can someone else provide a place for me to host the article?
> 
> Is the wiki insufficient?  It is currently hosting such insignificant
> articles as the 6.9/7.0 release plan, f.e. ...
> 
> 


-- 
Jon Smirl
[EMAIL PROTECTED]


Re: State of Linux graphics

2005-08-30 Thread Jon Smirl
On 8/30/05, Daniel Stone <[EMAIL PROTECTED]> wrote:
> On Tue, 2005-08-30 at 12:03 -0400, Jon Smirl wrote:
> > The article has been reviewed but if it still contains technical
> > errors please let me know. Opinions on the content are also
> > appreciated.
> 
> 'As a whole, the X.org community barely has enough resources to build a
> single server. Splitting these resources over many paths only results in
> piles of half finished projects. I know developers prefer working on
> whatever interests them, but given the resources available to X.org,
> this approach will not yield a new server or even a fully-competitive
> desktop based on the old server in the near term. Maybe it is time for
> X.org to work out a roadmap for all to follow.'
> 
> You lose.

Daniel Stone, the administrator of freedesktop.org, has just taken it
upon himself to censor my article on the state of the X server. His
lame excuse is that I have stopped working on the core of Xegl. It
doesn't seem to matter that I contributed 1,000s of lines of code to
fd.o that I am continuing to do maintenance on. So much for this being
a free desktop.

Can someone else provide a place for me to host the article?

On 8/30/05, Daniel Stone <[EMAIL PROTECTED]> wrote:
>On Wed, 2005-08-31 at 00:37 -0400, Jon Smirl wrote:
>> Because I have written thousand of lines of code that are in the fd.o
>> repositories and I need access in order to do maintenance on them.

>Your account has been temporarily disabled in line with your assertion
>that you have stopped work on Xegl.  If you have small patches, I
>recommend submitting through Bugzilla.  If you intend to resume active
>development, please ping me and I can re-enable it.

-- 
Jon Smirl
[EMAIL PROTECTED]


Re: State of Linux graphics

2005-08-30 Thread Jon Smirl
On 8/30/05, Daniel Stone <[EMAIL PROTECTED]> wrote:
> On Tue, 2005-08-30 at 12:03 -0400, Jon Smirl wrote:
> > The article has been reviewed but if it still contains technical
> > errors please let me know. Opinions on the content are also
> > appreciated.
> 
> 'As a whole, the X.org community barely has enough resources to build a
> single server. Splitting these resources over many paths only results in
> piles of half finished projects. I know developers prefer working on
> whatever interests them, but given the resources available to X.org,
> this approach will not yield a new server or even a fully-competitive
> desktop based on the old server in the near term. Maybe it is time for
> X.org to work out a roadmap for all to follow.'
> 
> You lose.

I am not a member of the X.org board or any of its committees. I have
no control over what path X.org may choose to follow. All I did was
make a proposal. Everyone else is free to make proposals too. X.org
may choose to endorse one or continue business as usual.

-- 
Jon Smirl
[EMAIL PROTECTED]


Re: State of Linux graphics

2005-08-30 Thread Daniel Stone
On Tue, 2005-08-30 at 12:03 -0400, Jon Smirl wrote:
> The article has been reviewed but if it still contains technical
> errors please let me know. Opinions on the content are also
> appreciated.

'As a whole, the X.org community barely has enough resources to build a
single server. Splitting these resources over many paths only results in
piles of half finished projects. I know developers prefer working on
whatever interests them, but given the resources available to X.org,
this approach will not yield a new server or even a fully-competitive
desktop based on the old server in the near term. Maybe it is time for
X.org to work out a roadmap for all to follow.'

You lose.



Re: State of Linux graphics

2005-08-30 Thread Dave Airlie
> 
> As the author of Xgl and glitz I'd like to comment on a few things.
> 
> From the article:
> 
> > Xgl was designed as a near term transition solution. The Xgl model
> > was to transparently replace the drawing system of the existing
> > X server with a compatible one based on using OpenGL as a device
> > driver. Xgl maintained all of the existing X APIs as primary APIs.
> > No new X APIs were offered and none were deprecated.
> ..
> > But Xgl was a near term, transition design, by delaying demand for
> > Xgl the EXA bandaid removes much of the need for it.
> 
> I've always designed Xgl to be a long term solution. I'd like if
> whatever you or anyone else see as not long term with the design of Xgl
> could be clarified.

I sent this comment to Jon before he published:
"Xgl was never near term, maybe you thought it was but no-one else did, the
sheer amount of work to get it to support all the extensions the current X
server does would make it non-near term ..."

I believe he is the only person involved who considered it near term,
without realising quite how much work was needed...

Dave.


Re: State of Linux graphics

2005-08-30 Thread Jon Smirl
On 8/30/05, David Reveman <[EMAIL PROTECTED]> wrote:
> > Xgl was designed as a near term transition solution. The Xgl model
> > was to transparently replace the drawing system of the existing
> > X server with a compatible one based on using OpenGL as a device
> > driver. Xgl maintained all of the existing X APIs as primary APIs.
> > No new X APIs were offered and none were deprecated.
> ..
> > But Xgl was a near term, transition design, by delaying demand for
> > Xgl the EXA bandaid removes much of the need for it.
> 
> I've always designed Xgl to be a long-term solution. I'd like it
> clarified what you or anyone else sees as not long-term about the
> design of Xgl.

Xgl doesn't run standalone; it needs either Xegl or Xglx. Xglx isn't
really interesting since you're running an X server inside of another
one. It's a good environment for software developers, but I don't think
you would want to base a desktop distribution on it.

That leaves Xegl. If Xegl were to enter widespread use by the end of
2006 it would be the right solution. But I don't think it is going to
make it anywhere close to the end of 2006, since X11R7 and EXA are
going to be released in front of it. I suspect those two releases will
just be getting widespread by the end of 2006.

So we are looking at 2007. That means two more years' advances in
hardware, and things like an NV 6800GT will be $40. In that timeframe
programmable hardware will be mainstream. We also have time to fix
some of the problems in the current server. As described at the end of
the paper, a new server design would feature OpenGL as its primary
API; xlib would still be supported, but at secondary status.

> We already had a new drawing API for X, the X Render extension. Xgl is
> the first server to fully accelerate X Render.

I think the EXA people will say they have the first server in
distribution that fully accelerates X Render.

> 
> > Linux is now guaranteed to be the last major desktop to implement a
> > desktop GUI that takes full advantage of the GPU.
> 
> I'm not certain of that.

I can't see any scenario where Linux can put together a full GPU-based
desktop before MS/Apple. We aren't even going to be close; we are at
least a year behind. Even if we fix the server, all of the desktop
components need time to adjust too.

> 
> > In general, the whole concept of programmable graphics hardware is
> > not addressed in APIs like xlib and Cairo. This is a very important
> > point. A major new GPU feature, programmability is simply not
> > accessible from the current X APIs. OpenGL exposes this
> > programmability via its shader language.
> 
> That's just because we haven't had the need to expose it yet. I don't
> see why this can't be exposed through the Render extension. The trickier
> part is to figure out how we should expose it through the cairo API, but
> that's not an X server design problem.

It will be interesting to read other X developers' comments on
exposing programmable graphics via Render.

-- 
Jon Smirl
[EMAIL PROTECTED]


Re: State of Linux graphics

2005-08-30 Thread David Reveman
On Tue, 2005-08-30 at 12:03 -0400, Jon Smirl wrote:
> I've written an article that surveys the current State of Linux
> graphics and proposes a possible path forward. This is a long article
> containing a lot of detailed technical information as a guide to
> future developers. Skip over the detailed parts if they aren't
> relevant to your area of work.
> 
> http://www.freedesktop.org/~jonsmirl/graphics.html
> 
> Topics include the current X server, framebuffer, Xgl, graphics
> drivers, multiuser support, using the GPU, and a new server design.
> Hopefully it will help you fill in the pieces and build an overall
> picture of the graphics landscape.
> 
> The article has been reviewed but if it still contains technical
> errors please let me know. Opinions on the content are also
> appreciated.
> 

As the author of Xgl and glitz I'd like to comment on a few things.

From the article:

> Xgl was designed as a near-term transition solution. The Xgl model
> was to transparently replace the drawing system of the existing
> X server with a compatible one based on using OpenGL as a device
> driver. Xgl maintained all of the existing X APIs as primary APIs.
> No new X APIs were offered and none were deprecated.
..
> But Xgl was a near-term, transitional design; by delaying demand for
> Xgl, the EXA bandaid removes much of the need for it.

I've always designed Xgl to be a long-term solution. I'd like it
clarified what you or anyone else sees as not long-term about the
design of Xgl.

We already had a new drawing API for X, the X Render extension. Xgl is
the first server to fully accelerate X Render.


> Linux is now guaranteed to be the last major desktop to implement a
> desktop GUI that takes full advantage of the GPU. 

I'm not certain of that.


> In general, the whole concept of programmable graphics hardware is
> not addressed in APIs like xlib and Cairo. This is a very important
> point. A major new GPU feature, programmability is simply not
> accessible from the current X APIs. OpenGL exposes this
> programmability via its shader language.

That's just because we haven't had the need to expose it yet. I don't
see why this can't be exposed through the Render extension. The trickier
part is to figure out how we should expose it through the cairo API, but
that's not an X server design problem.


-David



Re: State of Linux graphics

2005-08-30 Thread Daniel Stone
On Tue, 2005-08-30 at 12:03 -0400, Jon Smirl wrote:
> The article has been reviewed but if it still contains technical
> errors please let me know. Opinions on the content are also
> appreciated.

'As a whole, the X.org community barely has enough resources to build a
single server. Splitting these resources over many paths only results in
piles of half finished projects. I know developers prefer working on
whatever interests them, but given the resources available to X.org,
this approach will not yield a new server or even a fully-competitive
desktop based on the old server in the near term. Maybe it is time for
X.org to work out a roadmap for all to follow.'

You lose.



Re: State of Linux graphics

2005-08-30 Thread Jon Smirl
On 8/30/05, Daniel Stone <[EMAIL PROTECTED]> wrote:
> On Tue, 2005-08-30 at 12:03 -0400, Jon Smirl wrote:
> > The article has been reviewed but if it still contains technical
> > errors please let me know. Opinions on the content are also
> > appreciated.
>
> 'As a whole, the X.org community barely has enough resources to build a
> single server. Splitting these resources over many paths only results in
> piles of half finished projects. I know developers prefer working on
> whatever interests them, but given the resources available to X.org,
> this approach will not yield a new server or even a fully-competitive
> desktop based on the old server in the near term. Maybe it is time for
> X.org to work out a roadmap for all to follow.'
>
> You lose.

I am not a member of the X.org board or any of its committees. I have
no control over what path X.org may choose to follow. All I did was
make a proposal. Everyone else is free to make proposals too. X.org
may choose to endorse one or continue business as usual.

-- 
Jon Smirl
[EMAIL PROTECTED]


Re: State of Linux graphics

2005-08-30 Thread Jon Smirl
On 8/30/05, Daniel Stone <[EMAIL PROTECTED]> wrote:
> On Tue, 2005-08-30 at 12:03 -0400, Jon Smirl wrote:
> > The article has been reviewed but if it still contains technical
> > errors please let me know. Opinions on the content are also
> > appreciated.
>
> 'As a whole, the X.org community barely has enough resources to build a
> single server. Splitting these resources over many paths only results in
> piles of half finished projects. I know developers prefer working on
> whatever interests them, but given the resources available to X.org,
> this approach will not yield a new server or even a fully-competitive
> desktop based on the old server in the near term. Maybe it is time for
> X.org to work out a roadmap for all to follow.'
>
> You lose.

Daniel Stone, the administrator of freedesktop.org, has just taken it
upon himself to censor my article on the state of the X server. His
lame excuse is that I have stopped working on the core of Xegl. It
doesn't seem to matter that I contributed 1,000s of lines of code to
fd.o that I am continuing to do maintenance on. So much for this being
a free desktop.

Can someone else provide a place for me to host the article?

On 8/30/05, Daniel Stone <[EMAIL PROTECTED]> wrote:
> On Wed, 2005-08-31 at 00:37 -0400, Jon Smirl wrote:
> > Because I have written thousands of lines of code that are in the fd.o
> > repositories and I need access in order to do maintenance on them.
>
> Your account has been temporarily disabled in line with your assertion
> that you have stopped work on Xegl.  If you have small patches, I
> recommend submitting through Bugzilla.  If you intend to resume active
> development, please ping me and I can re-enable it.

-- 
Jon Smirl
[EMAIL PROTECTED]


Re: State of Linux graphics

2005-08-30 Thread Jon Smirl
On 8/31/05, Daniel Stone <[EMAIL PROTECTED]> wrote:
> On Wed, 2005-08-31 at 00:50 -0400, Jon Smirl wrote:
> > On 8/30/05, Daniel Stone <[EMAIL PROTECTED]> wrote:
> > > 'As a whole, the X.org community barely has enough resources to build a
> > > single server. Splitting these resources over many paths only results in
> > > piles of half finished projects. I know developers prefer working on
> > > whatever interests them, but given the resources available to X.org,
> > > this approach will not yield a new server or even a fully-competitive
> > > desktop based on the old server in the near term. Maybe it is time for
> > > X.org to work out a roadmap for all to follow.'
> > >
> > > You lose.
> >
> > Daniel Stone, the administrator of freedesktop.org, has just taken it
> > upon himself to censor my article on the state of the X server. His
> > lame excuse is that I have stopped working on the core of Xegl. It
> > doesn't seem to matter that I contributed 1,000s of lines of code to
> > fd.o that I am continuing to do maintenance on. So much for this being
> > a free desktop.
>
> Sigh.  As I explained in the long thread we had in private mail, I have
> done several cleanups now on inactive accounts and projects.  You are
> absolutely not the first, and will not be the last.  I have not done
> such sweeps for a while, because I have not had time.  The realisation
> that your account was doing nothing other than hosting an HTML page now
> that I have some amount of time to look at fd.o again was enough to spur
> me to start a cleanup, and indeed, I am in the process of pinging many
> other dormant contributors; many of whom have merely stopped working on
> X and may return, rather than having posted long statements of
> resignation to the list.
>
> And, as I explained, a simple statement of intent from you that you
> intend to resume active development will be enough to justify your
> account being renewed.

I told you multiple times that I am doing bug fixes and maintenance on
1,000s of lines of contributed code. Is that not a valid reason to
have an account? So only people writing new code can have accounts? I
guess you will have to disable half of all the accounts on fd.o to
enforce that policy.

You censored the article. 

From the fd.o account policy page:
http://www.freedesktop.org/wiki/AccountRequests

What the Project Leader does
  Review and approve the request for an account and access to your project.

I believe I am a member of five projects on fd.o and you are not the
Project Leader of any of them. I would like to see the request from
the Project Leaders for my account removal.


 
> (Alternately, if another administrator re-enables your account, I will
> not stop this.  I'm not the sole admin, not by far ...)
>
> Possibly impolitic and bad timing, sure.  But my intent was not to
> censor.
>
> > Can someone else provide a place for me to host the article?
>
> Is the wiki insufficient?  It is currently hosting such insignificant
> articles as the 6.9/7.0 release plan, f.e. ...


-- 
Jon Smirl
[EMAIL PROTECTED]


Re: State of Linux graphics

2005-08-30 Thread Jon Smirl
Before you shut my account off I made you this offer:

On 8/31/05, Jon Smirl <[EMAIL PROTECTED]> wrote:
> Quit being a pain and write a response to the article if you don't
> like it. Censorship is not the answer. Open debate in a public format
> is the correct response. If you want me to, I'll add your response to
> the end of the article.

I will still include your response if you want to write one.

On 8/31/05, Daniel Stone <[EMAIL PROTECTED]> wrote:
> On Wed, 2005-08-31 at 00:50 -0400, Jon Smirl wrote:
> > On 8/30/05, Daniel Stone <[EMAIL PROTECTED]> wrote:
> > > 'As a whole, the X.org community barely has enough resources to build a
> > > single server. Splitting these resources over many paths only results in
> > > piles of half finished projects. I know developers prefer working on
> > > whatever interests them, but given the resources available to X.org,
> > > this approach will not yield a new server or even a fully-competitive
> > > desktop based on the old server in the near term. Maybe it is time for
> > > X.org to work out a roadmap for all to follow.'
> > >
> > > You lose.
> >
> > Daniel Stone, the administrator of freedesktop.org, has just taken it
> > upon himself to censor my article on the state of the X server. His
> > lame excuse is that I have stopped working on the core of Xegl. It
> > doesn't seem to matter that I contributed 1,000s of lines of code to
> > fd.o that I am continuing to do maintenance on. So much for this being
> > a free desktop.
>
> Sigh.  As I explained in the long thread we had in private mail, I have
> done several cleanups now on inactive accounts and projects.  You are
> absolutely not the first, and will not be the last.  I have not done
> such sweeps for a while, because I have not had time.  The realisation
> that your account was doing nothing other than hosting an HTML page now
> that I have some amount of time to look at fd.o again was enough to spur
> me to start a cleanup, and indeed, I am in the process of pinging many
> other dormant contributors; many of whom have merely stopped working on
> X and may return, rather than having posted long statements of
> resignation to the list.
>
> And, as I explained, a simple statement of intent from you that you
> intend to resume active development will be enough to justify your
> account being renewed.
>
> (Alternately, if another administrator re-enables your account, I will
> not stop this.  I'm not the sole admin, not by far ...)
>
> Possibly impolitic and bad timing, sure.  But my intent was not to
> censor.
>
> > Can someone else provide a place for me to host the article?
>
> Is the wiki insufficient?  It is currently hosting such insignificant
> articles as the 6.9/7.0 release plan, f.e. ...


-- 
Jon Smirl
[EMAIL PROTECTED]


Re: State of Linux graphics

2005-08-30 Thread Jon Smirl
Access has been restored. The URL is good again.

http://www.freedesktop.org/~jonsmirl/graphics.html

-- 
Jon Smirl
[EMAIL PROTECTED]