https://bugs.kde.org/show_bug.cgi?id=427361

--- Comment #3 from skoupad-bugzi...@yahoo.com ---
I just had a thought about how this could be implemented. How about also using
the sound of the videos, along with their visual content, to try to match them?
Kind of like how 'Fuzzy View' works for still images, but for the sound. Some
kind of algorithm like the one YouTube uses to detect whether someone used a
song in their video. How accurate the matching needs to be could also be made
configurable by the user. YouTube's algorithm can find a song even if someone
just whistled the tune. I am not a programmer, but I think the waveform of a
video's sound should be unique enough to be accurately fingerprinted.
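
Just to illustrate the idea, here is a rough sketch (purely hypothetical, in
Python with numpy; the window length and band count are made-up parameters):
a fingerprint could simply record, for each short slice of audio, which
frequency band is the loudest.

    import numpy as np

    def fingerprint(samples, rate, window=0.1, bands=16):
        # samples: 1-D float array of mono audio; rate: samples per second
        n = int(rate * window)              # samples per analysis window
        fp = []
        for start in range(0, len(samples) - n, n):
            spectrum = np.abs(np.fft.rfft(samples[start:start + n]))
            # split the spectrum into equal bands and record the loudest one
            band_energy = [band.sum() for band in np.array_split(spectrum, bands)]
            fp.append(int(np.argmax(band_energy)))
        return np.array(fp)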

I can imagine a scenario where someone edited a wedding video and wants to see
how many small clips he used in the final, longer video. Using time stamps
would not work in this case because the lengths of the videos don't match. But
searching for the waveform fingerprints of the smaller clips inside the
waveform of the final, longer video should give the desired results.
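
Continuing the sketch above (again hypothetical), the search could be a simple
slide-and-compare over the two fingerprints, with the match threshold acting
as the user-adjustable accuracy knob mentioned earlier:

    import numpy as np

    def find_clip(clip_fp, movie_fp, min_match=0.8):
        # min_match: fraction of windows whose loudest band must agree
        hits = []
        for offset in range(len(movie_fp) - len(clip_fp) + 1):
            agree = np.mean(clip_fp == movie_fp[offset:offset + len(clip_fp)])
            if agree >= min_match:
                hits.append(offset)     # window index where the clip may start
        return hits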

Of course this would not work for videos without sound, so it should not be
the only thing that is checked. Also, when the fingerprints of the videos are
made, the function should ignore sections that happen to be silent and instead
choose a spot in the video with enough sound to make a good waveform
fingerprint.
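
For the silence problem, a rough (again hypothetical) approach would be to
measure the RMS energy of each window and only start fingerprinting from a
window above some threshold, assuming float samples normalized to [-1, 1]:

    import numpy as np

    def pick_loud_window(samples, rate, window=0.1, threshold=0.01):
        # return the start of the first window with enough sound,
        # or None if the whole track is effectively silent
        n = int(rate * window)
        for start in range(0, len(samples) - n, n):
            rms = np.sqrt(np.mean(samples[start:start + n] ** 2))
            if rms >= threshold:        # loud enough to fingerprint
                return start
        return None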
