Here's the flow as I understand it; correct me if your hardware works differently:
The video sink calls the ISurface createOverlay method to instantiate a hardware overlay. SurfaceFlinger calls the include/hardware/overlay.h abstraction layer to ask the hardware resource manager for an overlay interface. SurfaceFlinger retains an interface to the hardware and passes an IOverlay abstraction interface back to the video sink. This gives SurfaceFlinger an interface to call on if the display changes (position, size, rotation, etc.).

When playback initializes, encoded frames flow into the hardware decoder through the OMX decoder node. There are probably a number of frames queued up on the decoder input port so that the video pipeline is not interrupted if the application processor has to go do something really important for a while. Each frame is decoded and queued in the overlay frame buffer queue. There may be several of these decoded and queued, up to the limit of the hardware.

When all the nodes in the graph are prepared for playback, the playback clock goes active. The player engine calls the video sink's writeAsync interface to trigger the presentation of the first frame of video. The sink in turn calls the IOverlay swap method, which tells the overlay hardware to display the first frame. If the swap method can block, it should not be called on the AO thread but on a separate thread, so as not to stall the rest of the player engine. This process repeats each time a video frame is due to be presented.

If a seek occurs, the player engine calls the decoder and video sink nodes' Flush() methods to flush their buffers. This ensures that the frames are in sync the next time playback starts. I guess adding a buffer identifier would add a level of assurance that OpenCore is maintaining correct sync, but it seems unnecessary. If you really want it, you could pass it through the opaque pointer interface from the decoder to the video sink, and from there on to the overlay hardware.

The ISurface postBuffer method is only used in the case where there is no hardware overlay and SurfaceFlinger is compositing the video with the rest of the 2D surfaces. For overlay hardware, the 2D engine just leaves a hole in the 2D compositor (i.e. a black frame with alpha = 0) for the hardware overlay to show its pixels through.
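A few rough sketches of how I'd expect this to look in code. First, the setup handshake - note these are stand-ins I made up to illustrate the shape of the thing, not the actual framework or HAL headers; real signatures will differ:

#include <cstdio>
#include <memory>

// What the video sink holds: just enough to page-flip.
struct IOverlay {
    virtual ~IOverlay() {}
    virtual void swapBuffers() = 0;   // flip to the next decoded frame
};

// Stand-in for the hardware object behind include/hardware/overlay.h.
struct HwOverlay : IOverlay {
    void swapBuffers() override { std::puts("flip"); }
    void setPosition(int x, int y) { (void)x; (void)y; }  // SF-only control
};

struct SurfaceFlingerSide {
    // SurfaceFlinger keeps the concrete HAL object so it can reposition,
    // resize, or rotate the overlay if the display changes...
    std::shared_ptr<HwOverlay> hw;

    // ...and hands the sink only the narrow IOverlay interface.
    std::shared_ptr<IOverlay> createOverlay(int w, int h) {
        (void)w; (void)h;
        hw = std::make_shared<HwOverlay>();
        return hw;
    }
};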
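The decoder-to-overlay queueing and the flush-on-seek could look something like this. The opaque field is where a buffer identifier would ride along if you really wanted one; all names here are hypothetical:

#include <cstdint>
#include <deque>
#include <mutex>

// A decoded frame as it moves from the OMX decoder node to the sink.
struct DecodedFrame {
    int64_t ptsUs;    // presentation timestamp from the decoder
    void*   opaque;   // room for a buffer identifier, passed through untouched
};

class FrameQueue {
public:
    // Decoder side: queue a frame, up to the hardware's limit.
    bool push(const DecodedFrame& f, size_t hwLimit) {
        std::lock_guard<std::mutex> l(m_);
        if (q_.size() >= hwLimit) return false;   // overlay queue is full
        q_.push_back(f);
        return true;
    }

    // Sink side: take the next frame due for presentation.
    bool pop(DecodedFrame* out) {
        std::lock_guard<std::mutex> l(m_);
        if (q_.empty()) return false;
        *out = q_.front();
        q_.pop_front();
        return true;
    }

    // On a seek, the player engine flushes both ends so the first frame
    // decoded after the seek is the first frame presented.
    void flush() {
        std::lock_guard<std::mutex> l(m_);
        q_.clear();
    }

private:
    std::mutex m_;
    std::deque<DecodedFrame> q_;
};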
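And here's one way to keep a blocking swap off the AO thread: writeAsync just bumps a counter and signals a worker, and the worker is the only thread that ever calls swap. Again, a sketch with made-up names, not the real MIO plumbing:

#include <condition_variable>
#include <mutex>
#include <thread>

struct IOverlay {                     // same stand-in as the first sketch
    virtual ~IOverlay() {}
    virtual void swapBuffers() = 0;
};

class OverlayPresenter {
public:
    explicit OverlayPresenter(IOverlay* ov)
        : overlay_(ov), worker_(&OverlayPresenter::run, this) {}

    ~OverlayPresenter() {
        { std::lock_guard<std::mutex> l(m_); done_ = true; }
        cv_.notify_one();
        worker_.join();
    }

    // Called from writeAsync(): never blocks, just signals the worker.
    void presentNextFrame() {
        { std::lock_guard<std::mutex> l(m_); ++pending_; }
        cv_.notify_one();
    }

private:
    void run() {
        std::unique_lock<std::mutex> l(m_);
        for (;;) {
            cv_.wait(l, [this] { return pending_ > 0 || done_; });
            if (done_) return;
            --pending_;
            l.unlock();
            overlay_->swapBuffers();  // may block on vsync; only we wait
            l.lock();
        }
    }

    IOverlay* overlay_;
    std::mutex m_;
    std::condition_variable cv_;
    int pending_ = 0;
    bool done_ = false;
    std::thread worker_;              // declared last: starts fully armed
};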
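Finally, the "hole" the 2D engine leaves is literally black pixels at alpha = 0. In an RGBA8888 buffer that's just zeros (toy example; the real compositor obviously isn't a bare pixel array):

#include <cstdint>
#include <cstring>

// Fill the video window with 0x00000000: black, fully transparent.
// The overlay hardware's pixels show through wherever alpha is zero.
void punchHole(uint32_t* fb, int stride, int x, int y, int w, int h) {
    for (int row = y; row < y + h; ++row)
        std::memset(fb + row * stride + x, 0, w * sizeof(uint32_t));
}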
On Dec 18, 11:25 am, Andrew Green <andyg54...@hotmail.com> wrote:
> The swapBuffer call doesn't take a parameter, right? I guess the
> implementation could keep track of which buffer was next internally.
> It just seems somewhat less precise and inflexible to me. Also, if the
> parameter is added it then becomes very similar to ISurface, and maybe
> IOverlay isn't needed. I may not be understanding how IOverlay will be
> used. If anyone is missing any nuances, it is me. I admittedly don't
> yet understand how the HAL overlay and IOverlay relate.
>
> Thanks,
> Andrew
>
> > Date: Thu, 18 Dec 2008 08:32:33 -0800
> > Subject: Re: OMX source/sink and Tunnelling of processing components
> > From: davidspa...@android.com
> > To: android-framework@googlegroups.com
> >
> > Why doesn't the swap interface work for multiple buffers? Is there
> > some nuance I'm missing?
> >
> > On Dec 18, 6:06 am, Andrew Green wrote:
> >> It seems like it would be nice to maintain the postBuffer(offset)
> >> API of ISurface even for hardware overlays, because the overlay
> >> could have more than two buffers. Overlays with only two buffers
> >> could still just do page flips. If that API is kept, then is
> >> IOverlay even needed?
> >>
> >> To expand on the need for multiple overlay buffers - in our system
> >> the overlay driver needs to be the one to allocate video buffers.
> >> These buffers can be handed to the OMX codecs to be filled with
> >> video pixels. After being filled with video, they can be handed back
> >> to the overlay driver for display. More than two output buffers are
> >> needed to keep this running smoothly.
> >>
> >> Thanks,
> >> Andrew
> >>
> >>> Date: Wed, 17 Dec 2008 21:34:33 -0800
> >>> Subject: Re: OMX source/sink and Tunnelling of processing components
> >>> From: davidspa...@android.com
> >>> To: android-framework@googlegroups.com
> >>>
> >>> Yes, PVPlayer still uses an ISurface. But instead of pushing frames
> >>> into SurfaceFlinger, it just does a page flip on the IOverlay
> >>> interface it gets from SurfaceFlinger.
> >>>
> >>> On Dec 17, 7:25 pm, hanchao3c wrote:
> >>>> Hi Sparks!
> >>>> I will look at the cupcake code for it.
> >>>> I think using a hardware overlay is a good solution.
> >>>>
> >>>> But do you mean PVPlayer needn't change by using a hardware
> >>>> overlay? Just using ISurface as its interface?
> >>>>
> >>>> On Dec 18, 10:37 am, Dave Sparks wrote:
> >>>>> 1. If the MIO is active, then it controls its own sink using the
> >>>>> timestamps on the frames. If the MIO is passive, the frames
> >>>>> should be displayed as they arrive.
> >>>>>
> >>>>> 2. I think you are talking about the case where the video is
> >>>>> displayed by hardware overlay and not using the 2D engine
> >>>>> (SurfaceFlinger). In this case, you need to use the new overlay
> >>>>> abstraction as described in the other thread. PVPlayer does not
> >>>>> get involved at all; it only needs to know the size of the video,
> >>>>> not the window display size. Position, scaling, and rotation are
> >>>>> handled either by SurfaceFlinger or by the overlay hardware.
> >>>>>
> >>>>> 3. Color key is also in the hardware overlay abstraction.
> >>>>>
> >>>>> On Dec 17, 6:06 pm, hanchao3c wrote:
> >>>>>> Hi Dave Sparks!
> >>>>>> Thank you very much.
> >>>>>> I thought this solution was good, but then I found it is not
> >>>>>> perfect. I think the "control interface problems" are:
> >>>>>> 1. By using it, video will be rendered to the output device
> >>>>>> directly. Who can do the AV sync?
> >>>>>> 2. If high-level software wants to change the video size or
> >>>>>> orientation, how does it control PVPlayer?
> >>>>>> 3. If we want to change the color key to control the layers.
> >>>>>> (Case: when playing a video, another application is launched.)
> >>>>>>
> >>>>>> On Dec 17, 10:08 pm, jasperr wrote:
> >>>>>>> I found it's the video decoder node that allocates buffers for
> >>>>>>> its video decoder OMX component now.
> >>>>>>> (in function "provideBuffersforcomponent")
> >>>>>>>
> >>>>>>> If I want the video dec OMX component to use buffers allocated
> >>>>>>> by the video surface output, it seems I have to modify this
> >>>>>>> code.
> >>>>>>>
> >>>>>>> Is there an easy way for the video decoder node to find its
> >>>>>>> peer, the video sink node? So the video sink node can let the
> >>>>>>> surface output allocate buffers.
> >>>>>>>
> >>>>>>> Thanks
> >>>>>>> Amanda