Gack!  I'm *so* sick of hearing this argument...

On Tue, 2 Apr 2002, Raystonn wrote:

> I am definately all for increasing the performance of the software renderer.

Yes.

> Eventually the main system processor will be fast enough to perform all of
> this without the need for a third party graphics card.

I very much doubt this will happen within the lifetime of silicon chip
technology.  Maybe with nanotech, biological or quantum computing - but
probably not even then.

>  The only thing video
> cards have today that is really better than the main processor is massive
> amounts of memory bandwidth.

That is far from the truth - they also have internal pipelining
and parallelism.  Their use of silicon can be optimised around
the performance of one single algorithm.  You can never do that
for a machine that also has to run an OS, word processors and
spreadsheets.
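
To make the parallelism point concrete, here's a rough sketch in C
(my own illustration, not anybody's actual driver code) of why
rasterisation maps so well onto dedicated hardware - every pixel in
a span is independent of every other pixel:

  /* Illustrative only: a flat-coloured span fill.  There is no
   * loop-carried dependency, so dedicated hardware can replicate
   * this datapath N times and retire N pixels per clock.  A
   * general-purpose CPU has to step through it serially (or in
   * small SIMD chunks at best). */
  void fill_span(unsigned int *fb, int pitch, int y,
                 int x0, int x1, unsigned int colour)
  {
      unsigned int *row = fb + y * pitch;
      int x;

      for (x = x0; x < x1; x++)
          row[x] = colour;
  }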

>  Since memory bandwidth is increasing rapidly,...

It is?!?  Let's look at the facts:

Since 1989, CPU speed has grown by a factor of 70.  Over the same
period the memory bus has increased by a factor of maybe 6 or so.

Caching can go some way to hiding that - but not for things like
graphics that need massive frame buffers and huge texture maps.
Caching also makes parallelism difficult, and rendering algorithms
are highly parallelisable.  PCs are *horribly* memory-bound.
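
Here's a back-of-the-envelope figure (my own numbers, purely
illustrative) for how much memory traffic even a modest scene
generates - far beyond anything an on-chip cache can absorb:

  /* Assumed workload: 1024x768, 32bpp, 60Hz, overdraw of 3, with
   * a colour read+write, a Z read and two 32-bit texel fetches per
   * pixel.  All numbers are illustrative assumptions. */
  #include <stdio.h>

  int main(void)
  {
      double pixels    = 1024.0 * 768.0 * 3.0;   /* overdraw of 3     */
      double bytes_pix = 4 + 4 + 4 + 2 * 4;      /* colour, Z, texels */
      double per_sec   = pixels * bytes_pix * 60.0;

      printf("%.0f Mbytes/sec of memory traffic\n", per_sec / 1e6);
      return 0;
  }

That comes out at nearly 3 Gbytes/sec for one not-very-ambitious
frame, and very little of it stays resident long enough for a CPU
cache to help.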

> I foresee the need for video cards lessening in the future.

Whilst memory bandwidth inside the main PC is increasing, it's doing
so very slowly - and all the tricks it uses to get that speedup are equally
applicable to the graphics hardware (things like DDR for example).

On the other hand, the graphics card can use heavily pipelined
operations to guarantee that the memory bandwidth is 100% utilised
- and can use an arbitrarily large amount of parallelism to improve
throughput.  The main CPU can't do that because its memory access
patterns are not regular and it has little idea where the next byte
has to be read from until it's too late.
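
The difference is easy to see in a couple of toy routines (again,
just an illustration of the access patterns, nothing more):

  /* Two ways to touch the same amount of memory.  The streaming
   * walk has a perfectly predictable address sequence, so a
   * pipelined memory system can keep the bus busy.  The pointer
   * chase is typical of general CPU workloads: the next address
   * isn't known until the current load has completed. */
  struct node { struct node *next; int payload; };

  long stream_sum(const int *a, int n)        /* predictable   */
  {
      long s = 0;
      int i;

      for (i = 0; i < n; i++)
          s += a[i];
      return s;
  }

  long chase_sum(const struct node *p)        /* unpredictable */
  {
      long s = 0;

      while (p != 0) {
          s += p->payload;
          p = p->next;      /* address depends on the previous load */
      }
      return s;
  }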

Also, the instruction set of the main CPU isn't optimised for the
rendering task - whereas that is the ONLY thing the graphics chip
has to do.  The main CPU has all this legacy crap to deal with because
it's expected to run programs that were written 20 years ago.
Every generation of graphics chip can have a totally redesigned
internal architecture that exactly fits the profile of today's
RAM and silicon speeds.

You only have to look at the gap you are trying to bridge - a
modern graphics card is *easily* 100 times faster at rendering
sophisticated pixels (with pixel shaders, multiple textures and
antialiasing) than the CPU.

> A properly
> implemented and optimized software version of a tile-based "scene-capture"
> renderer much like that used in Kyro could perform as well as the latest
> video cards in a year or two.  This is what I am dabbling with at the
> moment.

I await this with interest - but 'scene capture' systems tend to be
unusable with modern graphics APIs... they can't run either OpenGL
or Direct3D efficiently for arbitrary input.  If there were to be
some change in consumer needs that made 'scene capture' a usable
technique, then the graphics cards could easily take that on board
and would *STILL* beat the heck out of doing it on the CPU.  Scene
capture is also only feasible if the number of polygons being
rendered is small and bounded - the trend in modern graphics
software is to generate VAST numbers of polygons on-the-fly
precisely so that they don't have to be stored in slow old memory.
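
To make the storage problem explicit, here is a rough outline of
what 'scene capture' involves (a sketch of the general idea only -
not Kyro's actual algorithm):

  /* Every triangle has to be stored and binned into the screen
   * tiles it touches *before* any rasterisation can start, so the
   * bin memory grows with scene complexity - which is exactly the
   * quantity that is exploding in modern applications.  All names
   * and sizes here are made up for the sketch. */
  #define TILE_SIZE   32
  #define MAX_BINNED  65536                /* arbitrary cap */

  struct tri { float x[3], y[3]; /* ...plus Z, colour, texcoords */ };

  struct tri scene[MAX_BINNED];            /* the captured frame */
  int        n_tris;

  void bin_triangle(int id, const struct tri *t,
                    void (*add_to_tile)(int tx, int ty, int id))
  {
      /* conservative bound: the triangle's screen bounding box */
      float minx = t->x[0], maxx = t->x[0];
      float miny = t->y[0], maxy = t->y[0];
      int i, tx, ty;

      for (i = 1; i < 3; i++) {
          if (t->x[i] < minx) minx = t->x[i];
          if (t->x[i] > maxx) maxx = t->x[i];
          if (t->y[i] < miny) miny = t->y[i];
          if (t->y[i] > maxy) maxy = t->y[i];
      }
      for (ty = (int)miny / TILE_SIZE; ty <= (int)maxy / TILE_SIZE; ty++)
          for (tx = (int)minx / TILE_SIZE; tx <= (int)maxx / TILE_SIZE; tx++)
              add_to_tile(tx, ty, id);   /* lists grow with polygon count */
  }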

Everything that is speeding up the main CPU is also speeding up
the graphics processor - faster silicon, faster busses and faster
RAM all help the graphics just as much as they help the CPU.

However, increasing the number of transistors you can fit on
a chip doesn't help CPUs very much.  Their instruction
sets are not getting more complex in proportion to the increase
in silicon area - and their ability to make use of more complex
instructions is already limited by the brain power of compiler
writers.  Most of the speedup in modern CPUs is coming from
physically shorter distances for signals to travel and faster
clocks - all of the extra gates typically end up increasing the
size of the on-chip cache, which has marginal benefits for graphics
algorithms.

In contrast to that, a graphics chip designer can just double
the number of pixel processors or something and get an almost
linear increase in performance with chip area with relatively
little design effort and no software changes.

If you doubt this, look at the progress over the last 5 or 6
years.  In late 1996 the Voodoo-1 had a 50Mpixel/sec fill rate.
In 2002, the GeForce-4 has a fill rate of 4.8 billion (antialiased)
pixels/sec - roughly 100 times faster.  Over the same period,
your 1996 233MHz CPU has gone up to a 2GHz machine... a mere
10x speedup.  The graphics cards are also gaining features.
Over that same period, they added windowing, hardware T&L,
antialiasing, multitexture, programmability - you name it.
Meanwhile the CPUs have added just a modest amount of MMX/3DNow
type functionality... almost none of which is actually *used*,
because our compilers don't know how to generate those new
instructions when compiling generalised C/C++ code.
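
As a concrete example (my own illustration - not Mesa code), the
only way to get at those MMX instructions today is to write them
by hand with compiler intrinsics; no current compiler will turn
the plain C loop into the packed version for you:

  /* Saturating add of two greyscale pixel spans.  add_pixels_c()
   * is what a typical application actually contains; the MMX
   * version processes eight pixels per instruction, but somebody
   * has to write it explicitly. */
  #include <mmintrin.h>                    /* MMX intrinsics */

  void add_pixels_mmx(unsigned char *dst, const unsigned char *src, int n)
  {
      int i;

      for (i = 0; i + 8 <= n; i += 8) {
          __m64 a = *(const __m64 *)(dst + i);
          __m64 b = *(const __m64 *)(src + i);
          *(__m64 *)(dst + i) = _mm_adds_pu8(a, b);   /* PADDUSB */
      }
      _mm_empty();                         /* EMMS before FPU use */

      for (; i < n; i++) {                 /* scalar tail */
          int v = dst[i] + src[i];
          dst[i] = (v > 255) ? 255 : v;
      }
  }

  void add_pixels_c(unsigned char *dst, const unsigned char *src, int n)
  {
      int i;

      for (i = 0; i < n; i++) {
          int v = dst[i] + src[i];
          dst[i] = (v > 255) ? 255 : v;
      }
  }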

CONCLUSION.
~~~~~~~~~~~
There is no sign whatsoever that CPUs are "catching up" with
graphics cards - and no logical reason why they ever will.

I'm in favor of speeding up Mesa's software renderer because
there are cases where it's still useful (e.g. in PDAs - where
we are only *just* beginning to see custom graphics processors).

----
Steve Baker                      (817)619-2657 (Vox/Vox-Mail)
L3Com/Link Simulation & Training (817)619-2466 (Fax)
Work: [EMAIL PROTECTED]           http://www.link.com
Home: [EMAIL PROTECTED]       http://www.sjbaker.org

