On Thu, 4 Apr 2002, Raystonn wrote:

> The games perform overdraw, sure.  But I am talking about at the pixel
> level.  A scene-capture algorithm performs 0 overdraw, regardless of what
> the game sends it.

That's not true.  I've designed and built machines like this and I know.

You still need overdraw when:

  * you are antialiasing a polygon edge.
  * you are rendering translucent surfaces (see the sketch
    after this list).
  * you need more textures on a polygon than your
    hardware can render in a single pass.
  * you have to read pixels back from the frame buffer and then
    continue rendering polygons.
  * polygons get smaller than a pixel in width or height.
  * you need to draw more polygons than your hardware has
    room to store.

...I'm sure there are other reasons too.
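
To make the translucency case concrete, here's a minimal C sketch of
back-to-front alpha blending (the three-layer scene and all the names
are made up for illustration).  Every translucent layer has to read
and rewrite the pixel beneath it, so even a renderer with perfect
hidden-surface removal touches the same pixel once per layer:

  #include <stdio.h>

  /* One pixel, blended back-to-front: dst = src*a + dst*(1-a).
     Even with zero hidden-surface overdraw, N translucent layers
     force N reads and N writes of the same pixel. */
  typedef struct { float r, g, b; } Pixel;

  static void blend(Pixel *dst, Pixel src, float alpha)
  {
    dst->r = src.r * alpha + dst->r * (1.0f - alpha);
    dst->g = src.g * alpha + dst->g * (1.0f - alpha);
    dst->b = src.b * alpha + dst->b * (1.0f - alpha);
  }

  int main(void)
  {
    Pixel fb = { 0.0f, 0.0f, 0.0f };    /* opaque background     */
    Pixel layer[3] = { {1,0,0}, {0,1,0}, {0,0,1} };
    int i;

    for (i = 0; i < 3; i++)             /* back-to-front         */
      blend(&fb, layer[i], 0.5f);       /* pixel written 3 times */

    printf("final pixel: %.3f %.3f %.3f\n", fb.r, fb.g, fb.b);
    return 0;
  }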

>  This reduces fillrate needs greatly.

It reduces fill-rate needs (in my experience) by a factor of between 2
and 4, depending on the nature of the scene.  You can easily invent
scenes that show much more benefit - but those tend to be contrived
cases that don't crop up much in real applications because of things
like portal culling.
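
To put rough numbers on that, here's a sketch of the arithmetic - the
screen size, frame rate and depth complexity below are illustrative
assumptions, not measurements.  An immediate-mode card shades one
fragment per covered polygon per pixel; a scene-capture card shades
each (opaque) pixel roughly once, so the saving is about equal to the
average depth complexity of the scene:

  #include <stdio.h>

  int main(void)
  {
    const double pixels = 1024.0 * 768.0; /* screen size          */
    const double fps    = 60.0;           /* target frame rate    */
    const double depth  = 3.0;            /* assumed avg. depth
                                             complexity (2..4)    */

    double immediate = pixels * depth * fps; /* all fragments     */
    double deferred  = pixels * 1.0   * fps; /* ~one shade/pixel  */

    printf("immediate fill: %.0f pixels/sec\n", immediate);
    printf("deferred fill:  %.0f pixels/sec (%.1fx less)\n",
           deferred, immediate / deferred);
    return 0;
  }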

> > Also, in order to use scene capture, you are reliant on the underlying
> > graphics API to be supportive of this technique.  Neither OpenGL nor
> > Direct3D are terribly helpful.
>
> Kyro-based 'scene-capture' video cards support both Direct3D and OpenGL.

They do - but they perform extremely poorly for OpenGL programs that
do anything much more complicated than just throwing a pile of polygons
at the display.  As soon as you get into reading back pixels for any
reason, any scene-capture system has to render the polygons it has
before the program can access the pixels in the frame buffer.
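
For instance, in the fragment below (a sketch - the draw_* functions
are hypothetical stand-ins for real application code, and a current GL
context is assumed), the glReadPixels call can't return until every
queued polygon has been rasterised, so a scene-capture part must bin,
sort and render everything captured so far right in the middle of the
frame:

  #include <GL/gl.h>

  /* Hypothetical application hooks standing in for real drawing. */
  static void draw_first_batch(void)                { /* ... */ }
  static void draw_dependent_pass(const GLubyte *p) { (void)p; }

  void frame_with_readback(void)
  {
    draw_first_batch();

    /* An immediate-mode card has already put those polygons in the
       frame buffer.  A scene-capture design has only *stored* them,
       so this single call forces it to render the whole captured
       scene before it can hand back one pixel. */
    GLubyte pixel[4];
    glReadPixels(10, 10, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);

    draw_dependent_pass(pixel);  /* ...and capture starts over. */
  }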

> > > Everything starts out in hardware and eventually moves to software.
> >
> > That's odd - I see the reverse happening.  First we had software
>
> The move from hardware to software is an industry-wide pattern for all
> technology.  It saves money.  3D video cards have been implementing new
> technologies that were never used in software before.  Once the main
> processor is able to handle these things, they will be moved into software.
> This is just a fact of life in the computing industry.  Take a look at what
> they did with "Winmodems".  They removed hardware and wrote drivers to
> perform the tasks.  The same thing will eventually happen in the 3D card
> industry.

That's not quite a fair comparison.

Modems can be moved into software because there is no need for them *EVER*
to get any faster.  All modern modems can operate faster than any standard
telephone line and are in essence *perfect* devices that cannot be improved
upon in any way.  Hence a hardware modem that ran MUCH faster than any
software one would be easy to build - but nobody builds one because it's
just not useful.
That artificial limit on the speed of a modem is the only thing that allows
software to catch up with hardware and make it obsolete.

We might expect sound cards to go the same way - once CPUs get fast
enough to produce any conceivable audio experience that the human
perceptual system can comprehend, software audio will have a chance to
catch up.  That hasn't happened yet - which is something I find rather
surprising.

But that's in no way analogous to the graphics situation where we'll continue
to need more performance until the graphics you can draw are completely
photo-realistic - indistinguishable from the real world - and operate over
the complete visual field at eye-limiting resolution.  We are (in my
estimation) still at least three orders of magnitude in performance
away from that pixel fill rate and far from where we need to be in
terms of realism and polygon rates.
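
For anyone who wants to check that estimate, here is the sort of
back-of-the-envelope arithmetic behind it.  Every figure is an
assumption for illustration (about one arc-minute of eye resolution,
a ~200 x 135 degree visual field, and so on), not a measurement:

  #include <stdio.h>

  int main(void)
  {
    const double h_deg   = 200.0; /* horizontal field, degrees   */
    const double v_deg   = 135.0; /* vertical field, degrees     */
    const double px_deg  = 60.0;  /* ~1 arc-minute per pixel     */
    const double eyes    = 2.0;   /* two independent views       */
    const double hz      = 72.0;  /* refresh rate                */
    const double depth   = 3.0;   /* depth complexity / overdraw */
    const double samples = 4.0;   /* antialiasing samples/pixel  */

    double need = (h_deg * px_deg) * (v_deg * px_deg)
                * eyes * hz * depth * samples;

    /* ~1 Gpixel/sec is a generous *peak* figure for a current
       card; effective fill under multitexturing and blending is
       well below that, which widens the gap further.            */
    printf("needed: %.2e pixels/sec\n", need);
    printf("vs. ~1e9 peak: %.0fx\n", need / 1.0e9);
    return 0;
  }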

----
Steve Baker                      (817)619-2657 (Vox/Vox-Mail)
L3Com/Link Simulation & Training (817)619-2466 (Fax)
Work: [EMAIL PROTECTED]           http://www.link.com
Home: [EMAIL PROTECTED]       http://www.sjbaker.org


