On Sat, 2009-10-10 at 11:08 +0200, Dirk Meyer wrote:
> 1. Make video work on clutter. I guess that is Jason's task if he is
> still working on it.
I am. Perhaps not as diligently as I ought to be; including the day job, I've got too many projects on the go at present. A bit of a status update, then.

What I have currently is a display server prototype written in the most horrid C you've ever laid eyes upon. I haven't given up my tv/movie watching habits though, so I'm definitely dog-fooding my own code.

The purpose of the display server is to receive frames from an out-of-process player, get them into a ClutterTexture in the most efficient way possible, and then render those frames at the exact retrace cycles they need for smooth video. One important element of this process is that a number of frames from the player get buffered on the display server, which helps smooth out any timing discrepancies from the player, and also means frame decoding and frame display aren't serialized, so we can make better use of available cores.

As I've mentioned before, VDPAU throws a serious wrinkle into that. With VDPAU we have three possibilities:

1. Defer entirely to MPlayer in displaying VDPAU, much like our old approach
   with Xv. We won't be able to pull running video onto the canvas, but
   overlay would still be nice to have, so we're back to the old vf_overlay
   drawing board. (Mind you, overlay in VDPAU would be entirely different and
   should be simpler, but landing any code in MPlayer would probably require
   an entire vo-agnostic overlay subsystem, I think.)

2. Use texture-from-pixmap to get a VDPAU surface into a GL texture on the
   canvas. This is problematic: I'm not quite sure it's possible to queue
   frames with this approach, and in any case the performance seems dubious.
   On my 9400, with frame-doubling deinterlacing (-vo vdpau:deint=2 or 3) of
   1080i30 to 1080p60, it doesn't look like TFP can keep up, but I haven't
   yet determined whether this is a GPU bottleneck or some other timing
   problem.

3. A combination of #1 and #2.
When watching video in full-screen mode (normal viewing), use #1, preferably with some overlay. When the user is in the menus or we otherwise want video on the canvas, we can start redirecting the VDPAU window, possibly disabling deinterlacing while redirected.

I'm really hoping for #2, because it's a more unified pipeline with the software-decoding scenario, and because it has more long-term viability: hardware decoding is always just a stop-gap until CPU speeds catch up. But I need to a) sort out whether I can queue redirected frames (I vaguely think it should be possible, but I need to verify), and b) determine whether the 60fps performance issue is a bottleneck somewhere or just timing.

There's one thing I could use some help with: getting some good 2x deinterlacers working with clutter. All the important ingredients are floating around; they just need to be sifted out:

1. yadif exists as an ASM shader in mythtv trunk
   (libs/libmythtv/openglvideo.cpp).

2. Clutter doesn't support ASM shaders (GL_FRAGMENT_PROGRAM_ARB) natively,
   but clutter-gst (clutter-gst-video-sink.c) shows how they can be used via
   direct GL calls.

This is an ideal task for someone to help out with, because it can be done fairly autonomously from everything else.

Jason.

_______________________________________________
Freevo-devel mailing list
Freevo-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/freevo-devel