Attila Kinali wrote:
On Wed, 18 Jul 2007 18:34:27 -0700
James Richard Tyrer <[EMAIL PROTECTED]> wrote:
Attila Kinali wrote:
On Tue, 17 Jul 2007 11:37:08 -0700 James Richard Tyrer wrote:
The scaler isn't necessary for us as OGA will have one anyways (for
the 3D stuff).
I thought that the hardware scaler would only be used for video.
Doesn't make sense. The video signal is generated at the RAMDAC, so a
scaled version is already needed there. But since we have scalers
anyway for 3D, we can recycle those (just map the 2D data onto
a 3D object and scale it).
Duh? I mean video as in the output from the MPEG decoder. That needs
YUV to RGB conversion, and then it needs to be scaled to fit whatever
size window it is being displayed in.
The YUV->RGB color space conversion is rather simple to implement.
Yes, simple to implement, but computationally expensive, and therefore,
costly to implement -- large amount of chip real estate needed.
Compared to most 3D operations it's actually computationally cheap.
It's just 9 multiplications and 6 additions in the general case
The general case is to multiply the YUV 3-vector by a 3x3 matrix and add
a 3-vector constant. Yes, that is 9 multiplications, but with a 2-input
adder it is going to take more additions: the dot products take 6, and
you need 3 more for the constants, so 9 additions in all.
and if we limit ourselves to one YUV->RGB formula, then it's 7
multiplications and 7 additions. Additionally, this is something that
can easily be pipelined, so we should be able to spit out one converted
sample per clock cycle per pipeline.
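The general-case conversion being discussed is one 3x3 matrix multiply
plus a constant offset per pixel. A minimal sketch, assuming full-range
BT.601-style coefficients (the specific matrix values here are an
assumption for illustration, not something fixed by the thread):

```python
# Sketch of the general-case YUV -> RGB conversion: a 3x3 matrix
# multiply (9 multiplications) plus constant offsets per pixel.
# The BT.601 full-range coefficients below are an assumption.
def yuv_to_rgb(y, u, v):
    r = 1.000 * y + 0.000 * (u - 128) + 1.402 * (v - 128)
    g = 1.000 * y - 0.344 * (u - 128) - 0.714 * (v - 128)
    b = 1.000 * y + 1.772 * (u - 128) + 0.000 * (v - 128)
    # clamp each channel to the 8-bit range
    clamp = lambda x: max(0, min(255, int(round(x))))
    return clamp(r), clamp(g), clamp(b)
```

Since two of the matrix entries are zero and two are one, a fixed-formula
version needs fewer operations than the general 9-multiply case, which is
the point made above.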
The only question is the bandwidth of the video data. That will
determine how parallel the operations need to be, and how much hardware
is needed.
Upsampling from 4:2:2 to 4:4:4 isn't difficult either (a simple
FIR filter operating on scanlines), but upsampling from 4:2:0 to
4:2:2 is (it requires upsampling in the vertical direction; guess why
they don't do it).
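The 4:2:2 -> 4:4:4 case mentioned above can be sketched on a single
scanline. A 2-tap averaging filter is an assumed simplification here;
real hardware would typically use a longer FIR kernel:

```python
# Sketch of 4:2:2 -> 4:4:4 chroma upsampling on one scanline.
# Each input chroma sample covers two pixels; we pass the co-sited
# sample through and interpolate the in-between one with a simple
# 2-tap average (the tap choice is an assumption).
def upsample_422_to_444(chroma):
    out = []
    for i, c in enumerate(chroma):
        out.append(c)  # co-sited sample passes through unchanged
        nxt = chroma[i + 1] if i + 1 < len(chroma) else c  # edge repeat
        out.append((c + nxt) // 2)  # interpolated sample
    return out
```

The 4:2:0 -> 4:2:2 step is the same idea applied vertically, which is
harder in hardware because it needs line buffers rather than a few
registers on one scanline.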
Perhaps it is because it isn't needed. Which decoders output 4:2:0?
Uhmm.. MPEG-1, MPEG-2, MPEG-4, H.264, ... 4:2:0 is the de facto
standard for video chroma subsampling. The only case I know of that
regularly uses 4:2:2 is video cameras, because it is a lot easier to
implement.
I think that I am missing something here. Why does it matter whether
the 4:2:0 or 4:2:2 chroma sampling method is used, since the decoder is
going to output YUV data for every two pixels on each line anyway? If
so, then the decoder converts it from 4:2:0 to 4:2:2.
Deinterlacing is something that cannot really be done without
some information from the video source (or some assumptions about the
video output device), and if only a simple implementation is used
(which I assume), then the quality will suck.
Are we talking about motion-compensated deinterlacing (for sources
originally shot with an interlaced video camera and recorded on tape)?
No, we are talking about plain deinterlacing without any motion
compensation. The one needed if you display interlaced content
produced for TV consumption on a progressive display.
Plain deinterlacing is just writes to and reads from memory. I don't
know how that could "suck".
Or, perhaps you are referring to line doubling by vertical interpolation.
Or just cine deinterlacing, where all that is needed is to rearrange the
fields from 3:2 pulldown so that you always display an odd and an even
field from the same film frame together?
That's not deinterlacing but (inverse) telecine. It has nothing to
do with interlacing besides the fact that it produces the same combing
effect and results from the same assumption of an interlaced display
working at a specific frame rate.
If you have interlaced video data, it needs to be deinterlaced (not
upsampled to double the lines per field).
As such, inverse telecine is quite simple to implement (skip all
inserted frames and spread the remaining ones equally over the
time scale) while deinterlacing requires more work.
It is the same thing: write the fields to memory and then read them out
progressively, except that inverse telecine does it in a different order.
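The field rearrangement for a known 3:2 pulldown cadence can be
sketched directly: four film frames become ten fields (3+2+3+2), and
dropping the repeated fields while re-pairing the rest recovers the
original four frames. The fixed cadence offsets below are an
assumption; a real player has to detect the cadence first:

```python
# Sketch of inverse telecine for one known 3:2 pulldown cycle.
# fields holds ten fields covering four film frames A, B, C, D:
# [A1, A2, A1(repeat), B1, B2, C1, C2, C1(repeat), D1, D2]
def inverse_telecine(fields):
    assert len(fields) == 10  # exactly one pulldown cycle
    # keep one field pair per film frame, skipping the repeated fields
    return [(fields[0], fields[1]),
            (fields[3], fields[4]),
            (fields[5], fields[6]),
            (fields[8], fields[9])]
```

As the thread says, this is mostly bookkeeping: the same memory reads
and writes as weaving, just in a different order.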
Or, are you referring to line doubling by vertical interpolation?
As a side note, applying deinterlacing to telecined content
or inverse telecine to deinterlaced content has quite bad
effects on the image quality.
If you are referring to line doubling by vertical interpolation, then
yes, you want to avoid doing that to telecined content. But there is no
point, since you only need to weave the proper fields.
With a computer monitor, if you can run at 72 frames per second, this
gets much easier.
All in all, I would say we are better off implementing the stuff done
by this chip in OGA directly.
It is always better to implement stuff directly, unless it costs more to
reinvent the wheel.
We don't have to reinvent the wheel, it's already out there.
We just have to implement it again.
It is just an English expression: reimplementing something is
'reinventing the wheel'. The question is whether it is cheaper to make
your own wheel or buy one that is ready-made.
JRT
_______________________________________________
Open-hardware mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-hardware