Re: [FFmpeg-user] Frame rates and video accelerated

2015-02-09 Thread Carl Eugen Hoyos
Werner Robitza <...@gmail.com> writes:

> On Mon, Feb 9, 2015 at 9:10 PM, <...@elmaxsrl.it> wrote:
> > I'm pretty sure that the raw H264 doesn't have timestamps.
> 
> The NAL units themselves, no, but AFAIK the SPS 
> can indicate the time base according to the VUI 
> information in Annex E of H.264.

FFmpeg unfortunately does not always read 
these timestamps correctly...

Carl Eugen

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-user


Re: [FFmpeg-user] Frame rates and video accelerated

2015-02-09 Thread Werner Robitza
On Mon, Feb 9, 2015 at 9:10 PM, <...@elmaxsrl.it> wrote:
> I'm pretty sure that the raw H264 doesn't have timestamps.

The NAL units themselves, no, but AFAIK the SPS can indicate the time
base according to the VUI information in Annex E of H.264. So you have
to make sure that this is present for ffmpeg to detect it.
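
A quick way to check what ffmpeg actually derives from the stream is
ffprobe, e.g. (rough sketch, substitute your file name):

ffprobe -v error -select_streams v:0 -show_entries stream=r_frame_rate,avg_frame_rate,time_base IMAGE_FILE.h264

If that reports 25/1 while the camera really sends something else, the
VUI timing info is either missing or not being picked up.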

>Then I prepare the command to launch by sw the FFMPEG:
>  ffmpeg -y -i IMAGE_FILE.h264 -vf scale=960:-1 -c:v mpeg4 -q:v 1 
> IMAGE_FILE.mp4
> ...
>Stream #0:0: Video: h264 (Baseline), yuv420p, 1280x720, 25 fps, 25 
> tbr, 1200k tbn, 50 tbc

It assumes 25 fps here. Like I said, try "-r 12.5" or whatever you need, e.g.

ffmpeg -r 12.5 -i input output
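
Applied to the command from your mail that would be, for example:

ffmpeg -r 12.5 -i IMAGE_FILE.h264 -vf scale=960:-1 -c:v mpeg4 -q:v 1 IMAGE_FILE.mp4

(adjust 12.5 to whatever rate the camera actually sends)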

(Please do not top-post on this mailing list. Thanks!)


Re: [FFmpeg-user] Frame rates and video accelerated

2015-02-09 Thread chieppa

   Dear Sir,
   Yes, it is Annex B compliant.
   I take the raw H264 from the camera. In this example it is a Sony IPELA.
   Below is the code that receives the raw data from the camera via RTSP (as I
   said, I use Live555):
   unsigned int num = 0;
   // element type assumed to be unsigned char (raw NAL bytes)
   shared_ptr<std::vector<unsigned char> > imageNAL =
       make_shared<std::vector<unsigned char> >();
   char const* sPropPar = _fSubsession.fmtp_spropparametersets();
   if(sPropPar == nullptr || strlen(sPropPar) == 0)
       envir() << "\n***spropparametersets NULL***\n";
   SPropRecord* sps = parseSPropParameterSets(sPropPar, num);
   for(register unsigned k = 0; k < num; ++k) {
       if(sps[k].sPropLength == 0) {
           envir() << "\n***SPS NULL***\n";
           continue;
       }
       unsigned nalUnitSize = sps[k].sPropLength;
       unsigned char* nalUnitBytes = sps[k].sPropBytes;
       // write the Annex B start code, then the SPS & PPS bytes
       imageNAL->push_back(0x00);
       imageNAL->push_back(0x00);
       imageNAL->push_back(0x00);
       imageNAL->push_back(0x01);
       for(register unsigned i = 0; i < nalUnitSize; i++) {
           imageNAL->push_back(static_cast<unsigned char>(nalUnitBytes[i]));
       }
   }
   //#define DEBUG_PPS
   /*#ifdef DEBUG_PPS
   // debug dump of the collected SPS/PPS bytes
   for(unsigned index = 0; index < imageNAL->size(); index++) {
       std::cout << (static_cast<int>(imageNAL->at(index)) & 0xFF) << " ";
   }
   std::cout << std::endl;
   #endif*/
   // write the start code in front of the received frame NAL unit
   imageNAL->push_back(0x00);
   imageNAL->push_back(0x00);
   imageNAL->push_back(0x00);
   imageNAL->push_back(0x01);
   if(frameSize >= DUMMY_SINK_RECEIVE_BUFFER_SIZE_H264) {
       frameSize = DUMMY_SINK_RECEIVE_BUFFER_SIZE_H264 - 1;
       envir() << "\n***Discard bytes exceeding framesize***\n";
   }
   for(register unsigned i = 0; i < frameSize; i++) {
       imageNAL->push_back(_fReceiveBuffer[i]);
   }
   I save the captured H264 to a file. I grab about 10 seconds when an
   intrusion is detected.
   I'm pretty sure that the raw H264 doesn't have timestamps.

   Then I prepare the command to launch FFmpeg from the software:
     ffmpeg -y -i IMAGE_FILE.h264 -vf scale=960:-1 -c:v mpeg4 -q:v 1 IMAGE_FILE.mp4

   The result is:
   ffmpeg version N-66745-g0578623 Copyright (c) 2000-2014 the FFmpeg developers
 built on Oct  9 2014 02:15:39 with gcc 4.8 (Ubuntu 4.8.2-19ubuntu1)
 configuration: --prefix=HOME/ffmpeg_build --bindir=/home/cchieppa/bin
  libavutil      54. 10.100 / 54. 10.100
  libavcodec     56.  4.101 / 56.  4.101
  libavformat    56.  9.100 / 56.  9.100
  libavdevice    56.  1.100 / 56.  1.100
  libavfilter     5.  1.103 /  5.  1.103
  libswscale      3.  1.100 /  3.  1.100
  libswresample   1.  1.100 /  1.  1.100
   Input #0, h264, from '/home/cchieppa/Documents/VideoHUB5/image_SONY.h264':
 Duration: N/A, bitrate: N/A
   Stream #0:0: Video: h264 (Baseline), yuv420p, 1280x720, 25 fps, 25 tbr, 
1200k tbn, 50 tbc
   Output #0, mp4, to '/home/cchieppa/Documents/VideoHUB5/image_SONY.mp4':
 Metadata:
   encoder : Lavf56.9.100
   Stream #0:0: Video: mpeg4 ( [0][0][0] / 0x0020), yuv420p, 960x540, 
q=2-31, 200 kb/s, 25 fps, 12800 tbn, 25 tbc
   Metadata:
 encoder : Lavc56.4.101 mpeg4
   Stream mapping:
 Stream #0:0 -> #0:0 (h264 (native) -> mpeg4 (native))
   Press [q] to stop, [?] for help
   frame=  336 fps= 41 q=1.0 Lsize=   11566kB time=00:00:13.88 
bitrate=6826.1kbits/s
   video:11563kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB 
muxing overhead: 0.020902%

   In this case the MP4 plays too fast: the seconds shown in the video run
   about 2x too fast.
   I don't understand why.
   Thank you
   Cristiano

   From: ffmpeg-user-boun...@ffmpeg.org
   To: "FFmpeg user questions" <ffmpeg-user@ffmpeg.org>
   Cc:
   Date: Mon, 9 Feb 2015 19:24:20 +0100
   Subject: Re: [FFmpeg-user] Frame rates and video accelerated

   > On Sat, Feb 7, 2015 at 6:05 PM, Cristiano Chieppa wrote:
   > > I save the raw frames (NALs) in a file (some frames grabbed from a camera)
   > > and I invoke ffmpeg from the software - like a batch program - to encode
   > > the raw H264 into MP4.
   > > The conversion works well.
   >
   > Out of curiosity, how exactly are you doing it? Is the file basically
   > an Annex B stream?
   >
   > > Sometimes, under circumstances that I'm not able to understand, I see that
   > > the generated MP4 video is "accelerated": it plays much faster than the
   > > real stream.
   > > Is there a way to invoke ffmpeg and pass it, as a parameter, the timestamp
   > > of each grabbed frame to get a properly synchronized video?
   >
   > ffmpeg -r 25 -i input output
   >
   > This is how you force the frame rate.
   >
   > Actually, when you do "ffmpeg -i input -c:v libx264 output.h264", the
   > output file indicates the frame rate when you read it back in

Re: [FFmpeg-user] Frame rates and video accelerated

2015-02-09 Thread Werner Robitza
On Sat, Feb 7, 2015 at 6:05 PM, Cristiano Chieppa wrote:
> I save the raw frames (NALs) in a file (some frames grabbed from a camera)
> and I invoke ffmpeg from the software - like a batch program - to encode the
> raw H264 into MP4.
> The conversion works well.

Out of curiosity, how exactly are you doing it? Is the file basically
an Annex B stream?

> Sometimes, under circumstances that I'm not able to understand, I see that
> the generated MP4 video is "accelerated": it plays much faster than the
> real stream.
> Is there a way to invoke ffmpeg and pass it, as a parameter, the timestamp
> of each grabbed frame to get a properly synchronized video?

ffmpeg -r 25 -i input output

This is how you force the frame rate.

Actually, when you do "ffmpeg -i input -c:v libx264 output.h264", the
output file indicates the frame rate when you read it back in with
"ffmpeg -i output.h264". And when you force a different frame rate, for
example with "ffmpeg -i input -c:v libx264 -r 12 output.h264", this
frame rate change is visible when doing "ffmpeg -i output.h264".
This is due to the timing information being written into the Sequence
Parameter Set (SPS).
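
Since your first mail says you also save the RTP timestamps to a
separate file, one rough way to pick the value for "-r" is to derive the
average rate from them (H.264 over RTP uses a 90 kHz timestamp clock).
A minimal sketch, assuming one decimal timestamp per line in a
hypothetical rtp_timestamps.txt with no 32-bit wrap-around:

#include <cstdint>
#include <fstream>
#include <iostream>
#include <vector>

int main() {
    // read the saved RTP timestamps (file name and format are assumptions)
    std::ifstream in("rtp_timestamps.txt");
    std::vector<uint64_t> ts;
    uint64_t t;
    while (in >> t)
        ts.push_back(t);
    if (ts.size() < 2) {
        std::cerr << "need at least two timestamps\n";
        return 1;
    }
    // 90000 RTP ticks per second for H.264 video
    double seconds = static_cast<double>(ts.back() - ts.front()) / 90000.0;
    double fps = (ts.size() - 1) / seconds;
    std::cout << "average fps: " << fps << "\n";  // value to pass to ffmpeg via -r
    return 0;
}

This only gives an average; if the camera delivers frames at a variable
rate the result will still drift a little, but it should account for the
roughly 2x speed-up you are seeing.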


[FFmpeg-user] Frame rates and video accelerated

2015-02-07 Thread Cristiano Chieppa
Dear Users,
I'm a newbie with FFmpeg, so apologies if the question is trivial for you.
I'm getting a raw H264 bitstream (in C++) via RTSP from an IP camera (I'm
actually using many models). There is no audio.

I save the raw frames (NALs) in a file (some frames grabbed from a camera) and
I invoke ffmpeg from the software - like a batch program - to encode the raw
H264 into MP4.

The conversion works well.

Sometimes, under circumstances that I'm not able to understand, I see that the
generated MP4 video is "accelerated": it plays much faster than the real
stream.

I'm struggling with this behavior and trying to understand why. I tried to
find similar questions on Google without finding a good answer.

If I understand correctly (please kindly correct me), raw H264 doesn't contain
the timestamps of the frames: that should be the responsibility of the
container. Furthermore, the stream from an IP camera is probably not at a
constant frame rate, so frame durations could be very different (again, please
correct me).

While grabbing the NALs (I'm using Live555) I also get the timestamps from
RTP: I save these in another file.

Is there a way to invoke ffmpeg and pass it, as a parameter, the timestamp of
each grabbed frame to get a properly synchronized video?

Or, better, has someone already met this problem and solved it?

I'm sure that I'm doing something wrong.

I’m very confused.

Thank you

Cristiano

