Good morning,

On Thu, 19 Jul 2007 17:58:40 -0700
James Richard Tyrer <[EMAIL PROTECTED]> wrote:
> Attila Kinali wrote:

> > Yes, but we have to integrate the video we output in the output
> > of the other stuff of the graphics card. Thus it's easier to
> > use the frame buffer of OGA for the output. And if we use
> > that, we can also use the scaler in OGA.
> 
> Yes, and we do that by scaling the output of the decoder and writing it 
> somewhere in frame buffer memory.  This is the only way that I know to 
> do it -- it would be the same if we use existing chips or do our own.

Yes, but that is normally performed at blitting time and thus
doesn't need any additional memory reads or writes. And since it is
basically the same operation as displaying a texture on an object, we can
use the texture renderer (fragment/pixel shaders) from the 3D engine
to do the work for us (blitting a video is a degenerate corner
case of texture rendering, where the texture plane is parallel to the camera).
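To illustrate that degenerate case, here is a minimal Python sketch (function name and data layout are hypothetical) where the screen-to-texture mapping collapses to a pure linear function, turning texture rendering into a scaled blit:

```python
def blit_scaled(src, dst_w, dst_h):
    """Nearest-neighbour scaled blit of a 2D image (list of rows).

    This is texture mapping with the texture plane parallel to the
    screen: texture coordinates are a linear function of screen
    coordinates, so no perspective divide is needed."""
    src_h, src_w = len(src), len(src[0])
    out = []
    for y in range(dst_h):
        sy = y * src_h // dst_h                    # linear vertical mapping
        out.append([src[sy][x * src_w // dst_w]    # linear horizontal mapping
                    for x in range(dst_w)])
    return out
```

A real implementation would do this per fragment on the 3D engine, typically with bilinear filtering instead of nearest-neighbour sampling.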

> > What decoders are you talking about? Any example of a video
> > decoder that accepts 4:2:0 encoded video and outputs 4:2:2 decoded video?
> 
> I was speaking of a decoder in the abstract sense.  A black box that 
> accepts encoded video and outputs a digital signal that can be directly 
> displayed -- perhaps after DA conversion.

A decoder in the abstract sense does only one thing: it decodes.
It doesn't guess what kind of format the display device expects.
It's the layer between the decoder and the display device that
does the conversion. Otherwise every decoder would need to know
exactly which colorspaces and subsampling formats the display
device might accept.
 
> > I only know software video decoders and those don't 
> > do a 4:2:0 to 4:2:2 up conversion unless explicitly
> > asked to do so.
> > 
> AFAIK, the decoder must do this since there is no way to *directly* 
> display 4:2:0 chroma sampled video.

There is no way to *directly* display YUV 4:2:2 either. Displays
mostly accept RGB (4:4:4) only.
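For illustration, here is a minimal full-range BT.601 YUV-to-RGB conversion in Python. This is a sketch only; a real pipeline must honour the stream's actual matrix, range, and chroma siting:

```python
def yuv_to_rgb(y, u, v):
    """Convert one full-range BT.601 YUV sample (0..255 each) to RGB.

    This is the final conversion a display pipeline performs, since
    panels ultimately accept RGB (4:4:4)."""
    def clamp(c):
        return max(0, min(255, int(round(c))))
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    return clamp(r), clamp(g), clamp(b)
```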

>  I don't know how 4:2:0 digital video data would be formatted, 

So, if you don't know, how about looking it up?
www.fourcc.org is a good source for colour format specifications.
Hint: the most commonly used one for video is YV12.
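As a sketch of what YV12 looks like in memory (the helper name is illustrative): a full-resolution Y plane followed by quarter-resolution V and U planes, i.e. 4:2:0 subsampling at 1.5 bytes per pixel:

```python
def yv12_layout(width, height):
    """Return (offset, size) of each plane in a YV12 frame buffer.

    YV12 stores a full-resolution luma (Y) plane, then the V plane,
    then the U plane, each chroma plane subsampled 2x2 (4:2:0).
    Even dimensions assumed."""
    y_size = width * height
    c_size = (width // 2) * (height // 2)
    return {
        "Y": (0, y_size),
        "V": (y_size, c_size),           # V precedes U in YV12
        "U": (y_size + c_size, c_size),
        "total": y_size + 2 * c_size,    # 1.5 bytes per pixel
    }
```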

> but you would need to remember more 
> than one pixel to display it.  IIUC, the display would have to remember 
> a whole line.

It's not the display but the upsampling system that does that.
And yes, it needs access to more than one pixel, but not to
a complete line.
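A sketch of the horizontal half of that job: upsampling one chroma line from 4:2:2 to 4:4:4 by linear interpolation needs only a sample and its neighbour. Simple midpoint siting is assumed here; the MPEG specs define the exact chroma siting:

```python
def upsample_422_to_444_line(chroma):
    """Double the horizontal chroma resolution of one scan line.

    Each output pair is the original sample plus the midpoint between
    it and its right neighbour, so only two input samples are needed
    at a time. The last sample is repeated at the right edge."""
    out = []
    for i, c in enumerate(chroma):
        out.append(c)                                  # co-sited sample
        nxt = chroma[min(i + 1, len(chroma) - 1)]      # clamp at edge
        out.append((c + nxt) // 2)                     # interpolated sample
    return out
```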

> I found this link interesting since we are going to be designing this:
> 
> http://www.hometheaterhifi.com/volume_8_2/dvd-benchmark-special-report-chroma-bug-4-2001.html

That's a non-issue. It only happens when people don't read the
specs. Since at least MPEG-1 there has been a definition of chroma
subsampling for interlaced and non-interlaced video, and of how
it should be upsampled.


> > The algorithm you apply between the read and the write.
> > 
> >> Or, perhaps you are referring to line doubling by vertical interpolation.
> > 
> > NO! that's the most awfull way to deinterlace.
> 
> IIUC, you are suggesting running one odd and one even field through a 2D 
> FIR filter (actually a convolution) to "deinterlace" 

No. FIR filtering (usually called linear/cubic blend in this context)
is one method. There are others that use optical flow or try to emulate
the scanning and afterglow behaviour of interlaced CRT devices (i.e. TVs).

> I first note that 
> what you have just said is awful is one specific case of this general 
> method.  I presume that you would do odd frame 1 & even frame 1, then 
> even frame 1 & odd frame 2, and then odd frame 2 & even frame 2, etc. 

s/frame/field/

That works if, and only if, the display device is 100% synchronized
to the video stream. And you need to decrease the luminance of the
field that you show a second time to half or a quarter of its original
luminance to get the right effect, otherwise you'll get horrible
combing. That is, by the way, basically the "emulation" algorithm I
mentioned above.


> The simplest algorithm is to use vertical interpolation on each frame 
> to obtain the missing scan lines and then interpolation between the line 
> doubled images (temporal interpolation) to try to approximate a 
> progressive frame halfway in time between the two fields.  The simplest 
> in both cases is linear interpolation.
> 
> Sounds nice doesn't it but it is a 3x1 convolution filter:
> 
> PixelOut(N, M) =
>          PixelIn(N, M - 1)/4 + PixelIn(N, M)/2 + PixelIn(N, M + 1)/4

That's called linear blend.
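Here is that 3x1 convolution as a runnable Python sketch (edge lines are clamped; the function name is illustrative):

```python
def linear_blend(frame):
    """Apply the vertical [1/4, 1/2, 1/4] filter quoted above to every
    pixel of a frame (a list of scan lines). Top and bottom lines are
    clamped instead of reading outside the frame."""
    h = len(frame)
    out = []
    for m in range(h):
        above = frame[max(m - 1, 0)]
        below = frame[min(m + 1, h - 1)]
        out.append([(a + 2 * b + c) // 4
                    for a, b, c in zip(above, frame[m], below)])
    return out
```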

> I presume that this simple filter sucks.

Due to its simplicity it is the most widely used algorithm. It performs
reasonably well, but yes, artifacts are visible.

Luckily, the blurring only occurs during motion, where it is hardly
visible to the human eye. At least a lot less visible than combing
artifacts.

> OTOH, is this really a concern for content originally shot on film -- no 
> actual deinterlacing needed there.

Huh? That's illogical: if no deinterlacing is needed, then it is of _no_
concern.

> > Of course. But interlacing has nothing to do with telecining.
> 
> FIR temporal filtering has nothing to do with telecining, but telecining 
>   produces frames (every third frame) where the odd and even fields are 
> not from the same film frame.  So, it is related to deinterlacing 
> without filtering since you have to combine odd and even fields to 
> create a whole frame.

No, it's not related. With telecine you can just drop the frames
that were combined from two original frames and you get _exactly_
the original video back.

With interlacing it is _not_ possible to get back to the original
progressive video, because either there never was a progressive
video (if it was shot interlaced, as most TV productions are), or
it was interlaced afterwards without the exact algorithm used
being known.
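Because every original field survives the pulldown intact, inverse telecine amounts to re-pairing fields by the film frame they came from. A toy Python sketch that tracks only field labels (the data layout is hypothetical):

```python
def inverse_telecine(frames):
    """Recover the film frame order from a 2:3-pulldown stream.

    Each video frame is a (top, bottom) pair of labels naming the film
    frame each field came from, e.g. the classic five-frame cycle
    [('A','A'), ('B','B'), ('B','C'), ('C','D'), ('D','D')].
    Telecining never alters a field, so collecting each label once, in
    order of first appearance, reconstructs the film exactly. Truly
    interlaced material has no such progressive original to recover."""
    film = []
    for top, bottom in frames:
        for f in (top, bottom):
            if f not in film:
                film.append(f)
    return film
```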

> I find the illustrations here to be better:
> 
> http://www.theprojectorpros.com/learn.php?s=learn&p=theater_pulldown_deinterlacing

Please be aware that they mix up telecine and interlacing there,
because they talk about TV devices. In a video stream, telecined
content consists of progressive frames, which come straight from the
original film, and interlaced frames, which come from the telecining.

Thus, telecine and interlacing are two really distinct
mechanisms in video systems.
 
> > Interlacing is needed for all TV based systems because they operate
> > in a (surprise!) interlaced mode, showing half frames (fields) at
> > double rate.
> > 
> Actually, my new TV displays 480p and 720p and these signals are 
> broadcast along with 1080p (and the old NTSC 480i analog).
> 
> The 720p advocates say that progressive is better, but I find serious 
> motion artifacts that I don't see with 1080i.  Go figure.

Be careful. Motion artifacts also depend on the exact behaviour
of the display device. E.g. LCDs show more motion artifacts than
CRTs.

> IIUC, the reason that we need to worry about this is that we need to 
> show interlaced video along with our progressive video.

Yes, we need to worry about this, but not because we display
progressive and interlaced video together; rather, because
computer display devices are all progressive.

> > Computer monitors don't run at 72Hz. They run at arbitrary frequencies.
> > And this frequency cannot be changed at will by the graphics card,
> > just because the video it's showing sugests a different refresh rate.
> > 
> Most CRT monitors currently (and for the past 10 years) sold for 
> Personal Computers are variable frequency in both horizontal and 
> vertical.

Yes, but the user sets the monitor and graphics card to work
at _one_specific_ frequency, and we are not allowed to change
that without being asked to. Thus for all practical purposes
in video playback we have a fixed-frequency display.

> OTOH, I haven't looked into LCD monitors that much so I don't know if 
> they only accept certain fixed frequencies or if they accept variable 
> frequency sync input over a range like most current CRT monitors do.

LCDs usually run at a fixed 60 Hz.

Overall, I suggest you read up a bit on how video decoders
work and on what the issues are in displaying video on interlaced
and progressive displays. This discussion is getting too tedious
for my taste.

doom9, for all its geekiness and inaccuracy, might be a good place
to start. MPlayer and FFmpeg contain quite a bit of documentation too.
The MPEG and ITU video standards and their related books also explain
it to a certain extent.

                                Attila Kinali
-- 
Praised are the Fountains of Shelieth, the silver harp of the waters,
But blest in my name forever this stream that stanched my thirst!
                         -- Deed of Morred
_______________________________________________
Open-hardware mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-hardware
