Bob La Quey wrote:
> Other than spending a ton of money what is worthy of
> research here? Are there really any architectural or
> serious theoretical issues about how to do this? I
> suppose this will impress the Bigfoots and thus is
> important for that reason.
That probably doesn't qualify as a "ton" of money anymore. It's
probably only about $100,000 (and dropping) nowadays. Anyway ...
Leaving aside the problem of simply how to *move* that much data
effectively (that's somewhere around 1 GiB of data per frame, which works
out to roughly 100 GiB *every second*) ...
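To put rough numbers on that parenthetical (the bytes-per-pixel and
refresh rates below are my assumptions; the announcement doesn't specify
either):

    # Back-of-the-envelope bandwidth for a ~220 Mpixel wall.
    # Assumptions (mine): 4 bytes per pixel (32-bit RGBA), 60-120 Hz refresh.
    PIXELS = 220e6
    BYTES_PER_PIXEL = 4
    GIB = 2**30

    frame_bytes = PIXELS * BYTES_PER_PIXEL
    print(f"per frame: {frame_bytes / GIB:.2f} GiB")              # ~0.82 GiB/frame

    for hz in (60, 120):
        print(f"at {hz} Hz: {frame_bytes * hz / GIB:.0f} GiB/s")  # ~49 and ~98 GiB/s

At 120 Hz you land right around 100 GiB every second; even at 60 Hz it's
half that.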
Even setting the bandwidth aside, there are real architectural problems
with rendering that many pixels. It's a genuinely annoying parallel
processing problem, and the core question is how to split up the work.
Vertex transformations are easy to parallelize; that part is straightforward.
Compositing all of those pixels, however, is not. You don't know which
triangle fragments interact with which other triangle fragments until
*after* you've done all the calculations. So you need to come up with
algorithms that partition the problem without killing your pipeline
performance.
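To make the compositing problem concrete: the post doesn't commit to a
particular scheme, but a toy "sort-last" arrangement, where each node
renders its own share of the triangles into a full-size color-plus-depth
buffer and a final pass keeps the nearest fragment per pixel, looks
something like this sketch (the node count, buffer sizes, and random
"renders" are purely illustrative):

    import numpy as np

    # Toy sort-last composite. Each "node" has rendered its own subset of the
    # geometry into a full-resolution color buffer plus a depth buffer; the
    # composite keeps, per pixel, whichever fragment is closest to the camera.
    H, W, NODES = 480, 640, 4
    rng = np.random.default_rng(0)

    colors = rng.random((NODES, H, W, 3))              # each node's color buffer
    depths = rng.random((NODES, H, W))                 # each node's depth buffer
    depths[rng.random((NODES, H, W)) < 0.5] = np.inf   # pixels that node never covered

    winner = depths.argmin(axis=0)                     # nearest fragment per pixel
    final = np.take_along_axis(colors, winner[None, :, :, None], axis=0)[0]
    # 'final' is the (H, W, 3) composited image. Note the global gather over
    # every node's full framebuffer -- which is exactly the bandwidth problem above.

The depth comparison can't happen until every node has finished its
share, which is why the partitioning strategy matters so much for
pipeline throughput.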
> On the surface this just looks to me like "Gee whiz"
> science of very little real importance. Are they not
> simply building a "white elephant" that will be completely
> obsolete in a few years?
Well, 220 Megapixels is about the same order of magnitude as what you
would get if you rendered a standard sheet of paper at 1200dpi or a
physical desktop at 300dpi resolution. So, it's not an unreasonable
question to ask.
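For what it's worth, the arithmetic behind that comparison holds up (the
paper and desk dimensions below are my assumptions):

    # Rough pixel counts for the comparison above.
    # Assumed dimensions: US letter paper (8.5" x 11"), a 60" x 30" desktop.
    def megapixels(width_in, height_in, dpi):
        return width_in * dpi * height_in * dpi / 1e6

    print(megapixels(8.5, 11, 1200))  # ~134 Mpixel: letter paper at 1200 dpi
    print(megapixels(60, 30, 300))    # ~162 Mpixel: a desk-sized surface at 300 dpi

Both figures land within a factor of two of 220 megapixels.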
-a