Re: New Video Decode and Presentation API

2009-02-27 Thread Andy Ritger

Sorry for the slow response, Torgeir.

 Neither, I'm trying to not use deinterlacing in the graphics card.

 Is it technically (theoretically) possible to rearrange the order of merging 
 the fields and scaling?

I think if you want to scale, then you really need to deinterlace first.
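To make the ordering concrete: in VDPAU, field handling and scaling both
happen inside a single VdpVideoMixerRender() call.  The picture-structure
argument (together with whatever deinterlacing features are enabled)
determines how the fields are combined, and the destination rectangle
determines the scaling, so an application cannot reorder the two steps.
A rough, untested sketch; the helper name and the already-created mixer
and surfaces are just for illustration:

  #include <vdpau/vdpau.h>

  /* Sketch: one VdpVideoMixerRender call; the picture-structure argument
   * (plus any enabled deinterlacing features) selects how fields are
   * combined, and the destination rectangle performs the scaling.
   * "mixer", "decoded" and "output" are assumed to exist already, and
   * mixer_render to have been fetched via VdpGetProcAddress. */
  static VdpStatus
  render_scaled_frame(VdpVideoMixerRender *mixer_render, VdpVideoMixer mixer,
                      VdpVideoSurface decoded, VdpOutputSurface output)
  {
      VdpRect src = { 0, 0, 540, 480 };    /* content resolution */
      VdpRect dst = { 0, 0, 720, 576 };    /* display resolution */

      return mixer_render(mixer,
                          VDP_INVALID_HANDLE, NULL,  /* no background   */
                          VDP_VIDEO_MIXER_PICTURE_STRUCTURE_TOP_FIELD,
                          0, NULL,                   /* past surfaces   */
                          decoded,                   /* current surface */
                          0, NULL,                   /* future surfaces */
                          &src,
                          output,
                          &dst, &dst,                /* scaling happens here */
                          0, NULL);                  /* no extra layers */
  }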

The deinterlacer in NVIDIA's VDPAU implementation was improved in the
latest 180.35 beta:

 http://www.nvnews.net/vbulletin/showthread.php?t=128939

If you're having problems with the deinterlacer, and that is your
motivation for trying to avoid deinterlacing, then it might be worth
evaluating the deinterlacer updates.

I hope that helps,
- Andy


On Wed, 18 Feb 2009, Torgeir Veimo wrote:


 On 17 Feb 2009, at 18:05, Andy Ritger wrote:

 Are the even and odd frames scaled individually before being applied to the
 resulting surface, or are they applied to the screen, then scaled?

 I believe the latter: the content is deinterlaced at the resolution
 of the content, and then the progressive frame is scaled to the size
 specified by the video player.

 The documentation here may be helpful (particularly the data flow diagram):

   ftp://download.nvidia.com/XFree86/vdpau/doxygen/html/index.html

 Deinterlacing is done in the VdpVideoMixer, and I expect scaling is done
 as part of constructing the VdpOutputSurface.

 It may be worth looking at what deinterlacing configuration your video
 player is using.  E.g., is it requesting temporal or temporal-spatial
 deinterlacing?

 Neither, I'm trying to not use deinterlacing in the graphics card.

 Is it technically (theoretically) possible to rearrange the order of merging 
 the fields and scaling?

 --
 Torgeir Veimo
 torg...@pobox.com







Re: New Video Decode and Presentation API

2009-02-27 Thread Xavier Bestel
On Friday, 27 February 2009 at 15:21 -0800, Andy Ritger wrote:
  Neither, I'm trying to not use deinterlacing in the graphics card.
 
  Is it technically (theoretically) possible to rearrange the order of 
  merging the fields and scaling?
 
 I think if you want to scale, then you really need to deinterlace first.

Not if you want to scale to an interlaced display.

Xav



Re: New Video Decode and Presentation API

2009-02-18 Thread Torgeir Veimo


On 17 Feb 2009, at 18:05, Andy Ritger wrote:


Are the even and odd frames scaled individually before being applied to the
resulting surface, or are they applied to the screen, then scaled?


I believe the latter: the content is deinterlaced at the resolution
of the content, and then the progressive frame is scaled to the size
specified by the video player.

The documentation here may be helpful (particularly the data flow diagram):


   ftp://download.nvidia.com/XFree86/vdpau/doxygen/html/index.html

Deinterlacing is done in the VdpVideoMixer, and I expect scaling is done
as part of constructing the VdpOutputSurface.

It may be worth looking at what deinterlacing configuration your video
player is using.  E.g., is it requesting temporal or temporal-spatial
deinterlacing?



Neither, I'm trying to not use deinterlacing in the graphics card.

Is it technically (theoretically) possible to rearrange the order of  
merging the fields and scaling?


--
Torgeir Veimo
torg...@pobox.com





Re: New Video Decode and Presentation API

2009-02-16 Thread Torgeir Veimo

On 20 Nov 2008, at 09:31, Andy Ritger wrote:

 On Tue, 18 Nov 2008, Torgeir Veimo wrote:

 Is field parity observed when outputting interlaced material? I  
 think it's equally important to have good support for baseline  
 mpeg2 in addition to other codecs, and this would imply that  
 interlaced, field parity correct mpeg2 output on standard s-video /  
 rgb should be fully working.

 If the application takes advantage of VDPAU's de-interlacing (the current
 mplayer patches to use VDPAU do _not_ yet enable VDPAU's deinterlacing),
 then the end result of VDPAU's presentation queue is a progressive frame.

 If the application doesn't enable de-interlacing, NVIDIA's VDPAU
 implementation will currently copy the weaved frame to the progressive
 surface, and whether it will come out correctly will depend on whether the
 window's offset from the start of the screen is odd or even.


Consider the following scenario: the screen is set to the dimensions of a
PAL picture, 720x576, and the source material is interlaced, but with a
different native pixel size than the screen, e.g. 540x480.

Are the even and odd frames scaled individually before being applied to the
resulting surface, or are they applied to the screen, then scaled?

If the latter is the case, I cannot see how this setup will be able to
provide proper interlaced output without doing an intermediate deinterlace.
Even frames will bleed into odd frames due to the scaling, causing a double
(or ghost) picture effect. This seems consistent with what I'm seeing when
trying out a VDPAU setup with interlaced mpeg2 TV material.
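As a quick back-of-the-envelope illustration of the field bleeding I mean
(a hypothetical linear vertical scaler, nothing VDPAU-specific):

  #include <stdio.h>

  /* Toy example: scaling a weaved 480-line frame to 576 lines blends
   * lines from opposite fields into nearly every output line. */
  int main(void)
  {
      const int src_h = 480, dst_h = 576;

      for (int y = 0; y < 8; y++) {                 /* first few lines   */
          double pos = y * (double)src_h / dst_h;
          int    a   = (int)pos;                    /* upper source line */
          int    b   = a + 1 < src_h ? a + 1 : a;   /* lower source line */
          printf("output line %d blends source lines %d (field %d) "
                 "and %d (field %d)\n", y, a, a & 1, b, b & 1);
      }
      return 0;
  }

On static content the blend is invisible, but as soon as the two fields
come from different points in time it shows up as exactly the double image
described above.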

-- 
Torgeir Veimo
torg...@pobox.com






Re: New Video Decode and Presentation API

2008-11-19 Thread Andy Ritger

Hello, Torgeir.

Sorry for the slow response.  Comments inline below:


On Tue, 18 Nov 2008, Torgeir Veimo wrote:


 On 15 Nov 2008, at 04:28, Andy Ritger wrote:

 I'm pleased to announce a new video API for Unix and Unix-like platforms,
 and a technology preview implementation of this API from NVIDIA.

 * Defines an API for post-processing of decoded video, including
   temporal and spatial deinterlacing, inverse telecine, and noise
   reduction.

 What about interlaced output and TV output? Is that still possible with
 this API?

 Is field parity observed when outputting interlaced material? I think it's 
 equally important to have good support for baseline mpeg2 in addition to 
 other codecs, and this would imply that interlaced, field parity correct 
 mpeg2 output on standard s-video / rgb should be fully working.

If the application takes advantage of VDPAU's de-interlacing (the current
mplayer patches to use VDPAU do _not_ yet enable VDPAU's deinterlacing), 
then the end result of VDPAU's presentation queue is a progressive frame.

If the application doesn't enable de-interlacing, NVIDIA's VDPAU
implementation will currently copy the weaved frame to the progressive
surface, and whether it will come out correctly will depend on whether the
window's offset from the start of the screen is odd or even.
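For reference, turning the deinterlacer on is a VdpVideoMixer-level setting:
the application requests the feature when it creates the mixer and then
enables it.  Roughly like the following untested sketch; the helper name is
made up, and the function pointers are assumed to have been fetched through
VdpGetProcAddress:

  #include <vdpau/vdpau.h>

  /* Sketch: create a mixer sized for the decoded video and switch on
   * temporal deinterlacing. */
  static VdpStatus
  create_deinterlacing_mixer(VdpVideoMixerCreate *mixer_create,
                             VdpVideoMixerSetFeatureEnables *set_enables,
                             VdpDevice device, uint32_t width, uint32_t height,
                             VdpChromaType chroma, VdpVideoMixer *mixer)
  {
      VdpVideoMixerFeature features[] = {
          VDP_VIDEO_MIXER_FEATURE_DEINTERLACE_TEMPORAL,
      };
      VdpVideoMixerParameter params[] = {
          VDP_VIDEO_MIXER_PARAMETER_VIDEO_SURFACE_WIDTH,
          VDP_VIDEO_MIXER_PARAMETER_VIDEO_SURFACE_HEIGHT,
          VDP_VIDEO_MIXER_PARAMETER_CHROMA_TYPE,
      };
      void const *param_values[] = { &width, &height, &chroma };
      VdpBool enables[] = { VDP_TRUE };
      VdpStatus status;

      status = mixer_create(device, 1, features, 3, params, param_values, mixer);
      if (status != VDP_STATUS_OK)
          return status;
      return set_enables(*mixer, 1, features, enables);
  }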


 * Defines an API for timestamp-based presentation of final video
   frames.

 This is interesting. Can such timestamps be synchronised with HDMI audio in
 some way to guarantee judder-free output with no audio resyncs? I.e., no need
 to resample audio to compensate for clock drift?

You can query the current time, the time any previously presented frame
was first displayed, and specify the desired presentation time of each
frame as it is enqueued in the PresentationQueue.  This information is
hopefully sufficient to synchronize the audio stream with the presented
video frames.
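In outline, the feedback loop available to a player looks something like
the sketch below (untested; the helper name is invented, and the three
VdpPresentationQueue* function pointers are assumed to have been obtained
through VdpGetProcAddress):

  #include <vdpau/vdpau.h>

  /* Sketch: query the queue's clock, enqueue a frame with a desired
   * presentation time, then learn when it was actually first displayed.
   * Comparing that timestamp against the audio clock tells the player
   * how much drift it has to absorb. */
  static void
  queue_frame(VdpPresentationQueueGetTime *get_time,
              VdpPresentationQueueDisplay *display,
              VdpPresentationQueueBlockUntilSurfaceIdle *wait_idle,
              VdpPresentationQueue queue, VdpOutputSurface frame,
              uint32_t clip_w, uint32_t clip_h, VdpTime delay_ns)
  {
      VdpTime now, shown;

      get_time(queue, &now);                              /* current queue time */
      display(queue, frame, clip_w, clip_h, now + delay_ns); /* desired time    */
      wait_idle(queue, frame, &shown);                     /* when it was shown */
  }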

My understanding of audio over HDMI is that the GPU normally does a
simple pass-through of the audio stream coming from the audio hardware in
the computer.  The VDPAU presentation queue's timestamps use a different
crystal than the audio hardware, so I believe there would always be the
potential for drift.


 * Defines an API for compositing sub-picture, on-screen display,
   and other UI elements.

 I assume this indicates that video can easily be used as textures for opengl 
 surfaces, and that opengl surfaces (with alpha transparency support) can 
 easily be superimposed over video output?

Yes.  The VDPAU presentation queue can be created wrt any X drawable.
So you should be able to have VDPAU deliver final frames to an X pixmap,
and then use the GLX_EXT_texture_from_pixmap extension to texture from
that pixmap.  Note: there is not yet an API in place for synchronizing
between VDPAU and OpenGL, so this would be somewhat racy today.
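The GL side of that would be the usual texture_from_pixmap dance, roughly as
sketched below.  This is illustrative only: choosing an FBConfig with the
GLX_BIND_TO_TEXTURE_* attributes, creating the pixmap, and pointing the VDPAU
presentation queue target at it are assumed to have happened already, and
glXBindTexImageEXT to have been resolved through glXGetProcAddress.

  #include <GL/glx.h>
  #include <GL/glxext.h>

  /* Sketch: wrap the X pixmap that VDPAU renders into as a GLXPixmap and
   * bind it as a GL texture.  Remember the caveat above: with no
   * VDPAU/OpenGL synchronization API yet, this is racy. */
  static GLXPixmap
  wrap_video_pixmap(Display *dpy, GLXFBConfig config, Pixmap pixmap,
                    GLuint texture, PFNGLXBINDTEXIMAGEEXTPROC bind_tex_image)
  {
      const int attribs[] = {
          GLX_TEXTURE_TARGET_EXT, GLX_TEXTURE_2D_EXT,
          GLX_TEXTURE_FORMAT_EXT, GLX_TEXTURE_FORMAT_RGB_EXT,
          None
      };
      GLXPixmap glx_pixmap = glXCreatePixmap(dpy, config, pixmap, attribs);

      glBindTexture(GL_TEXTURE_2D, texture);
      bind_tex_image(dpy, glx_pixmap, GLX_FRONT_EXT, NULL);
      return glx_pixmap;
  }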


 These patches include changes against libavcodec, libavutil, ffmpeg,
 and MPlayer itself; they may serve as an example of how to use VDPAU.

 Would it be possible to provide a standalone playback test program that 
 illustrates the api usage outside of mplayer?

The test program would need to decode the data to extract the bit stream
to pass into VDPAU, so it is a non-trivial amount of code.

I'm sure if someone wanted to write a standalone VDPAU test app, others
interested in using VDPAU would benefit.


 If other hardware vendors are interested, they are welcome to also
 provide implementations of VDPAU.  The VDPAU API was designed to allow
 a vendor backend to be selected at run time.

 It would be helpful to have an open-source no-output backend to allow
 compile & run testing when supported hardware is not available. This would
 also help accelerate support for any software backend if anyone should
 choose to implement one.

You mean a pure software implementation of VDPAU?  Yes, that would
be interesting.  Someone could probably do that by wiring up the VDPAU
entry points to an existing software implementation of these codecs.
For now, the engineers at NVIDIA are going to focus on bugfixing our
GPU-based implementation of VDPAU.
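For anyone wanting to experiment, a backend only has to expose a single
device-creation entry point through which everything else is resolved.  A
no-output stub might start out something like this (sketch only; please check
the exact symbol name and loading scheme against the VDPAU headers and
wrapper before relying on it):

  #include <vdpau/vdpau.h>
  #include <vdpau/vdpau_x11.h>

  /* Sketch of a "no output" backend: the dispatcher asks the backend to
   * create a device and hand back a VdpGetProcAddress, through which all
   * other entry points are looked up.  A real test backend would return
   * no-op implementations instead of failing the lookup. */
  static VdpStatus
  noop_get_proc_address(VdpDevice device, VdpFuncId function_id,
                        void **function_pointer)
  {
      (void)device;
      (void)function_id;
      *function_pointer = NULL;              /* placeholder */
      return VDP_STATUS_NO_IMPLEMENTATION;
  }

  VdpStatus
  vdp_imp_device_create_x11(Display *display, int screen, VdpDevice *device,
                            VdpGetProcAddress **get_proc_address)
  {
      (void)display;
      (void)screen;
      *device = 1;                           /* any opaque handle */
      *get_proc_address = noop_get_proc_address;
      return VDP_STATUS_OK;
  }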


 VC-1 support in NVIDIA's VDPAU implementation currently requires GeForce
 9300 GS, GeForce 9200M GS, GeForce 9300M GS, or GeForce 9300M GS.


 So only mobile chipsets support VC-1 output currently?

 It seems that the marketplace is missing a 9500 GT based gfx card with
 passive cooling, a low form factor and HDMI-enabled output...

The GeForce 9300 GS is a desktop GPU.  I expect there are passively
cooled configs.

Thanks,
- Andy


 -- 
 Torgeir Veimo
 [EMAIL PROTECTED]







Re: New Video Decode and Presentation API

2008-11-17 Thread Torgeir Veimo

On 15 Nov 2008, at 04:28, Andy Ritger wrote:

 I'm pleased to announce a new video API for Unix and Unix-like  
 platforms,
 and a technology preview implementation of this API from NVIDIA.

 * Defines an API for post-processing of decoded video, including
   temporal and spatial deinterlacing, inverse telecine, and noise
   reduction.

What about interlaced output and TV output? Is that still possible
with this API?

Is field parity observed when outputting interlaced material? I think  
it's equally important to have good support for baseline mpeg2 in  
addition to other codecs, and this would imply that interlaced, field  
parity correct mpeg2 output on standard s-video / rgb should be fully  
working.

 * Defines an API for timestamp-based presentation of final video
   frames.

This is interesting. Can such timestamps be synchronised with HDMI
audio in some way to guarantee judder-free output with no audio resyncs?
I.e., no need to resample audio to compensate for clock drift?

 * Defines an API for compositing sub-picture, on-screen display,
   and other UI elements.

I assume this indicates that video can easily be used as textures for  
opengl surfaces, and that opengl surfaces (with alpha transparency  
support) can easily be superimposed over video output?

 These patches include changes against libavcodec, libavutil, ffmpeg,
 and MPlayer itself; they may serve as an example of how to use VDPAU.

Would it be possible to provide a standalone playback test program  
that illustrates the api usage outside of mplayer?

 If other hardware vendors are interested, they are welcome to also
 provide implementations of VDPAU.  The VDPAU API was designed to allow
 a vendor backend to be selected at run time.

It would be helpful to have an open-source no-output backend to
allow compile & run testing when supported hardware is not available.
This would also help accelerate support for any software backend if
anyone should choose to implement one.

 VC-1 support in NVIDIA's VDPAU implementation currently requires GeForce
 9300 GS, GeForce 9200M GS, GeForce 9300M GS, or GeForce 9300M GS.


So only mobile chipsets support VC-1 output currently?

It seems that the marketplace is missing a 9500 GT based gfx card
with passive cooling, low form factor and HDMI-enabled output...

-- 
Torgeir Veimo
[EMAIL PROTECTED]



