Another thing to consider is that you may not need to split the videos at
all.  If you have many video files instead of a few big ones, and each can
be processed more or less independently, then you can use something like
NLineInputFormat (not necessarily NLineInputFormat itself, but something
like it) to process each video separately: the input to the job is a list
of file paths, one per line, and each mapper handles its share of the
files.  You would have to write the code to read in each video file, but
there are APIs for that, such as OpenCV.  This is what I did in the past
to train and score machine-learned classifiers on image and video files
using Hadoop.
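As a rough illustration of the pattern above, here is a minimal Hadoop Streaming-style mapper sketch in Python: each input line is assumed to be a path to one video file, and each path is processed independently. The `count_frames` stub stands in for real video handling (with OpenCV you would open the file with `cv2.VideoCapture` and loop over frames); it is a placeholder, not working video code.

```python
import sys

def count_frames(path):
    # Placeholder for real per-video work. With OpenCV you would do
    # something like: cap = cv2.VideoCapture(path), then read frames
    # in a loop and run your classifier on each one.
    raise NotImplementedError("plug in real video handling here")

def run_mapper(lines, process=count_frames):
    """Streaming-style mapper: one video path per input line.

    Mirrors the NLineInputFormat idea: each mapper receives a fixed
    number of lines (paths) and processes each file independently.
    Returns (path, result) pairs.
    """
    results = []
    for line in lines:
        path = line.strip()
        if not path:
            continue  # skip blank lines in the input split
        results.append((path, process(path)))
    return results

if __name__ == "__main__":
    # In a real Streaming job, stdin carries the lines of this
    # mapper's split; emit tab-separated key/value pairs.
    for path, result in run_mapper(sys.stdin):
        print(f"{path}\t{result}")
```

The split-into-lines-of-paths trick keeps the (large, compressed) video bytes out of the shuffle entirely; only the paths and the per-video results flow through MapReduce.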

--Bobby Evans

On 9/19/11 11:54 PM, "Swathi V" <swat...@zinniasystems.com> wrote:

This link might help you:
example 
<http://musicmachinery.com/2011/09/04/how-to-process-a-million-songs-in-20-minutes/>

On Tue, Sep 20, 2011 at 9:52 AM, Rajen Bhatt (RBEI/EST1) 
<rajen.bh...@in.bosch.com> wrote:
Dear MapReduce User Groups:
We want to process a large amount of video (typically 30 days of stored 
footage, around 1 TB in size) using Hadoop.
Can somebody point me to code samples or classes that can take video files in 
their original compressed format (H.264, MPEG-4) and process them using Mappers?
Thanks and Regards,

~~
Dr. Rajen Bhatt
(Corporate Research @ Robert Bosch, India)
Off: +91-80-4191-6699
Mob: +91-9901241005




