I once wrote a filter for mplayer that searched for black frames.  The
idea was to find the one or more black frames that most often sit
between "content" and the commercials.

It was quite effective at finding the black frames, but it had to
analyse every decoded frame (being just a filter it didn't decode
anything itself, mplayer proper did that), and on the CPU I was using
at the time that just wasn't practical.  Flat out, I could only run it
at about 2-2.5x normal speed, and the reality was that I could search
through a stream for commercials by hand faster than that.

Now, it struck me that given the format of MJPEG streams, it really
isn't necessary to decode a whole frame before deciding it isn't a
"black frame".  Since frames don't depend on the ones before them (as
they do in the likes of an MPEG stream), in theory one only needs to
decode enough of each frame to rule it out as a black frame, and most
of the time that should be doable within the first few scan lines.
This process should be quite efficient!
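
To make it concrete, here's a rough sketch of the per-frame test I
have in mind, written against plain libjpeg rather than anything in
mjpegtools (it assumes a libjpeg new enough to have jpeg_mem_src, and
the row limit and luma threshold are just guesses):

    /* Sketch of the partial-decode idea on one in-memory JPEG frame.
     * Decodes luma only, a scan line at a time, and bails out as soon
     * as a bright pixel proves the frame isn't black. */
    #include <stdio.h>
    #include <jpeglib.h>

    /* Return 1 if the first few scan lines are all dark enough,
     * 0 as soon as any pixel exceeds luma_threshold. */
    int frame_looks_black(const unsigned char *jpeg, unsigned long len,
                          int max_rows, int luma_threshold)
    {
        struct jpeg_decompress_struct cinfo;
        struct jpeg_error_mgr jerr;
        int black = 1;

        cinfo.err = jpeg_std_error(&jerr);
        jpeg_create_decompress(&cinfo);
        jpeg_mem_src(&cinfo, (unsigned char *)jpeg, len);
        jpeg_read_header(&cinfo, TRUE);
        cinfo.out_color_space = JCS_GRAYSCALE;  /* luma is enough */
        jpeg_start_decompress(&cinfo);

        JSAMPARRAY row = (*cinfo.mem->alloc_sarray)
            ((j_common_ptr)&cinfo, JPOOL_IMAGE, cinfo.output_width, 1);

        while (black && (int)cinfo.output_scanline < max_rows &&
               cinfo.output_scanline < cinfo.output_height) {
            jpeg_read_scanlines(&cinfo, row, 1);
            for (JDIMENSION x = 0; x < cinfo.output_width; x++) {
                if (row[0][x] > luma_threshold) { black = 0; break; }
            }
        }

        jpeg_abort_decompress(&cinfo);  /* skip the rest of the frame */
        jpeg_destroy_decompress(&cinfo);
        return black;
    }

The point is the jpeg_abort_decompress(): the moment one bright pixel
turns up, the rest of the frame never gets decoded at all.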

So my question is: how well does the mjpegtools framework match this
theory?  Could I write a "searcher" on top of the framework already
present in the mjpegtools suite that decodes only enough of a frame to
decide it's not a black frame and then moves on to the next one?
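
For the searcher itself I picture a loop like the one below.  It walks
a directory of numbered JPEGs as a stand-in for whatever reader lav_io
actually provides (I don't have the lav calls memorised, so the file
naming and the 16-row / threshold-24 values are purely assumptions):

    /* Toy driver for frame_looks_black() above: print the index of
     * every frame that looks black. */
    #include <stdio.h>
    #include <stdlib.h>

    int frame_looks_black(const unsigned char *jpeg, unsigned long len,
                          int max_rows, int luma_threshold);

    int main(void)
    {
        char name[64];
        for (long n = 1; ; n++) {
            snprintf(name, sizeof name, "frame_%06ld.jpg", n);
            FILE *f = fopen(name, "rb");
            if (!f)
                break;                      /* ran out of frames */

            fseek(f, 0, SEEK_END);
            long len = ftell(f);
            rewind(f);
            unsigned char *buf = malloc(len);
            size_t got = fread(buf, 1, (size_t)len, f);
            fclose(f);

            if ((long)got == len &&
                frame_looks_black(buf, (unsigned long)len, 16, 24))
                printf("%ld\n", n);         /* candidate cut point */
            free(buf);
        }
        return 0;
    }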

I'm thinking the output would be an editlist that cuts the
commercials; the lav suite of tools could then do whatever it wants
with that editlist to produce a commercial-free stream.
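
Something like this for the output side, turning the black runs into
the segments between them.  I'm writing the "LAV Edit List" header
lines from memory, so they'd need checking against editlist.c in
mjpegtools before trusting them:

    /* Given runs of black frames, write the segments between them as
     * an edit list over a single source file. */
    #include <stdio.h>

    struct run { long first, last; };   /* inclusive black-frame run */

    void write_editlist(const char *avi, long total_frames,
                        const struct run *black, int nruns, FILE *out)
    {
        /* header: magic line, norm, file count, file name(s) */
        fprintf(out, "LAV Edit List\nNTSC\n1\n%s\n", avi);

        long keep_start = 0;
        for (int i = 0; i < nruns; i++) {
            if (black[i].first > keep_start)
                fprintf(out, "0 %ld %ld\n",
                        keep_start, black[i].first - 1);
            keep_start = black[i].last + 1;
        }
        if (keep_start < total_frames)
            fprintf(out, "0 %ld %ld\n", keep_start, total_frames - 1);
    }

Deciding which of the kept segments are actually commercials (by
length, say) would still need doing on top of this.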

Thoughts?

b.

