Re: [Libav-user] Best way to create a fractional ms timer?

2013-04-01 Thread René J.V. Bertin
On 1 April 2013 10:45, Mike Versteeg  wrote:

> Thanks John. As I said, this is more prone to jitter than a timer event.
> I've also remembered to change the priority, but oddly enough, at least on
> W7, it makes no difference: I still get the occasional (exact) 10 ms delay,
> which almost never happens when using a timer.
>

I presume MS Windows still doesn't provide timers with <1 ms resolution, be
it event-based or something like usleep/nanosleep? I haven't seen your
original post, but I've struggled myself to get reliable 1 kHz
realtime behaviour without external hardware. I ended up with a busy-wait
loop (hogging the CPU was acceptable in my case).

R
___
Libav-user mailing list
Libav-user@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/libav-user


Re: [Libav-user] Source code debugging libav using Xcode

2013-03-28 Thread René J.V. Bertin
In Xcode 3, I simply add a new external build target to which I add the ffmpeg 
source tree. Add that target as a dependency to your own target, and you should 
be set. If you want to be extra sure, build ffmpeg with the same compiler you 
use in your project.
BTW, I have a git project up on github.com/RJVB that shows how to make a 
monolithic framework out of the FFmpeg libs.


Re: [Libav-user] QTKit -> Libav: has it ever been done?

2013-03-27 Thread René J.V. Bertin
On 27 March 2013 10:26, Carl Eugen Hoyos  wrote:

> No, I am not saying that.
> (I don't know.)

Heh, I'd be surprised if it does exist. If indeed very few FFmpeg devs
have OS X hardware, what reason would such a decoder have to exist? :)

> It is of course possible that QTKit uses a completely
> different format, but one way to find out is to test
> your resampling wrapper code with a decoder for which
> you know the actual format.

Yes. And in parallel, one could dump the captured output, not in some
container format as I suggested before, but as raw QTSampleBuffers.
It shouldn't be too hard to extract the necessary API (structure
definition(s), stub functions, etc.) so that one can have
platform-independent code for locating the data of interest in those
imported QTSampleBuffers and feeding it into libav.

Provided of course that the encoding part of Brad's code doesn't make
use of OS X-specific programming-language features like ARC. (Which is
keeping me from playing with his code, because it won't build on my
older OS version ...)

> Note that an endianess issue is very unlikely because FFmpeg
> only supports native endian audio formats (as opposed to

So if the QTSampleBuffers contain non-native-endian data, the
FFmpeg-encoded output will inevitably come out the "wrong way around"
unless it is converted before being encoded. No?
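
If the buffers did turn out to hold non-native-endian samples, the conversion is a one-line swap per sample before the buffer is handed to the encoder. A minimal sketch for 16-bit PCM (the helper name is mine):

```c
#include <stddef.h>
#include <stdint.h>

/* Byte-swap a buffer of 16-bit PCM samples in place, e.g. to turn
 * big-endian capture data into the native-endian layout FFmpeg expects. */
static void swap16_buffer(int16_t *samples, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        uint16_t v = (uint16_t)samples[i];
        samples[i] = (int16_t)(uint16_t)((v >> 8) | (v << 8));
    }
}
```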

> codecs), a signed / unsigned problem is of course possible
> but this should be relatively easy to verify.

There must be something relatively simple that underlies Brad's
problem. After all, video encoding works fine, so his general approach
cannot be completely wrong ...


Re: [Libav-user] QTKit -> Libav: has it ever been done?

2013-03-27 Thread René J.V. Bertin
On 27 March 2013 09:58, Carl Eugen Hoyos  wrote:

> not use QTKit? There is definitely a native decoder
> that outputs the format that (you believe) QTKit
> offers, and that code would not be OSX specific in
> the end.

Are you saying there's a decoder that outputs the decoded content in
QTSampleBuffer format, tested to be accepted as input by QTKit?

Even if that's the case (and I'm not doubting your word on it), that
still doesn't guarantee that those buffers are filled with the same
kind of data. I recall that Brad's problem is with audio, and the
symptoms suggest that there is some kind of misinterpretation of the
sample data. It could be as simple as a disagreement on endianness, or
signed vs. unsigned. His words ("ear-piercing screams") also evoke
what happens when you put a mic too close to the speaker it feeds
into, but I fail to see how one would achieve that kind of ringing by
accident in software :)

Would it also be an idea to dump the captured content to a supported
container file (ideally without any additional processing, of course),
use that as the input, and try to analyse things from there?

R.


Re: [Libav-user] Converting audio sample buffer format

2013-02-25 Thread René J.V. Bertin


Carl Eugen Hoyos  wrote:

>Brad O'Hearne  writes:
>
>> On Feb 18, 2013, at 3:50 PM, Carl Eugen Hoyos  wrote:
>> 
>> > While I have _no_ idea what the "flv audio codec" could 
>> > be, please use either the aconvert filter or libswresample 
>> > directly to convert from one audio format to another.
>> 
>> This has turned out to be much more difficult than expected.
>
>Before you start debugging (the cast to sourceData looks 
>suspicious): Did you look at doc/examples/filtering_audio.c 
>and doc/examples/resampling_audio.c ?
>I suspect using the aconvert filter has the advantage that 
>you can do other changes to the audio without additional 
>code (and bugs).
>
>In any case, using gdb should quickly show you where the 
>problem lies.
>
>Carl Eugen
>


Exactly, but you'd need to build the libav libs yourself, with debugging info.

There's another thing that's nagging me. IIUC, the goal here is to convert a 
buffer of (C) floats into signed shorts. I have some difficulty believing that 
doing this through a generic workhorse function can be more efficient than 
writing a simple loop and letting a good optimising compiler create the best 
assembly out of it ...
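
The simple loop he has in mind might look like this (a sketch; the clamping guards against out-of-range input):

```c
#include <stddef.h>
#include <stdint.h>

/* Convert normalized float samples in [-1.0, 1.0] to signed 16-bit PCM,
 * clamping anything out of range.  A decent compiler vectorizes this. */
static void float_to_s16(const float *in, int16_t *out, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        float v = in[i] * 32767.0f;
        if (v >  32767.0f) v =  32767.0f;
        if (v < -32768.0f) v = -32768.0f;
        out[i] = (int16_t)v;
    }
}
```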

R.


Re: [Libav-user] How to Convert AV_PIX_FMT_RGB24 to AV_PIX_FMT_YUV420P

2013-02-15 Thread René J.V. Bertin


Chris Share  wrote:

>Thanks for the sample code.
>
>I have a couple of further questions:
>
>1. If the input data is in a vector of unsigned chars (RGBRGB...), what
>is the best pixel format to use?

I think that'd be AV_PIX_FMT_RGB24.

R


Re: [Libav-user] ff_log2_tab defined multiple times in the ffmpeg 1.1 libraries, bug or feature ??

2013-02-15 Thread René J.V. Bertin
>Everything prefixed with ff_ is a private symbol in ffmpeg, and as such
>is not supposed to be exposed in link libraries, because it is not
>part of the public API/ABI.
>There are already too many exceptions to this, and this avoids adding
>even more.

This argues for static local copies.

BTW, would there be much overhead in declaring ff_log2_tab locally in each 
function that uses it, with present-day compilers?
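
As a sketch of the idea (a toy 16-entry table, not FFmpeg's actual 256-entry ff_log2_tab): a function-local `static const` table is laid out in read-only storage at compile time, costs nothing per call, and its symbol never needs to be exported:

```c
/* Integer floor(log2(v)) via a small function-local lookup table.
 * Being function-local, the table's symbol stays private to this
 * translation unit; no ff_-style global needs to be exposed. */
static int ilog2_local(unsigned v)
{
    static const unsigned char log2_tab[16] = {
        0, 0, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3
    };
    int n = 0;
    if (v >= 1u << 16) { v >>= 16; n += 16; }
    if (v >= 1u << 8)  { v >>= 8;  n += 8;  }
    if (v >= 1u << 4)  { v >>= 4;  n += 4;  }
    return n + log2_tab[v & 15];
}
```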

R


Re: [Libav-user] How can i suspend ffmpeg's file reading?

2012-12-07 Thread René J.V. Bertin
Hello Jack

How do you launch ffmpeg, and on what OS? If you have a process ID for it, you 
could suspend it with SIGSTOP while you wait for data, but that too would of 
course halt playback. I've never tried it, but I suspect playback would resume 
when you release the process with SIGCONT.
Alternatively, buffer your stream, measuring download speed and expected 
bitrate during playback, so that you launch ffmpeg only once you have 
downloaded enough content to be reasonably sure the reader won't catch up with 
the download head. This is more or less how QuickTime handles streaming 
playback, but it isn't foolproof against network congestion or other unforeseen 
events.
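
Under the idealized assumption of constant download and playback rates, the minimum prebuffer is easy to derive: start playback just late enough that it cannot finish before the download does. A sketch (function name and units are my own):

```c
/* Minimum bytes to prebuffer so that constant-rate playback never
 * overtakes a slower constant-rate download.  Rates are in bytes per
 * second; returns 0 when the download already keeps up. */
static long long prebuffer_bytes(long long total_bytes,
                                 double download_bps, double playback_bps)
{
    if (download_bps >= playback_bps)
        return 0;
    double dl_time   = total_bytes / download_bps;
    double play_time = total_bytes / playback_bps;
    /* Delay the start by (dl_time - play_time); during that delay the
     * download accumulates exactly the buffer we need. */
    return (long long)(download_bps * (dl_time - play_time) + 0.5);
}
```

E.g. a 1000-byte file arriving at 100 B/s but played at 200 B/s needs 500 bytes buffered: download those in 5 s, then the remaining 5 s of playback ends exactly when the download does.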

René

Quy Pham Sy  wrote:

>Hi,
>
>My ffmpeg-based player plays an MPEG-TS file that is downloaded from a
>server. I want my player to be able to play the video file even while
>it is still being downloaded.
>
>There is no problem if the download speed is faster than the reading
>speed; otherwise, playback just stops.
>My question: is there any way to suspend ffmpeg's file reading, so it
>waits for data to become available before proceeding?
>
>Thanks
>Jack



Re: [Libav-user] get RGB values from ffmpeg frame

2012-11-20 Thread René J.V. Bertin
Have you looked at the tutorial on using ffmpeg with SDL, and/or at the ffplay 
source code? Another thing to check out would be the motion-JPEG encoder; AFAIK 
JPEG images are in RGB space.
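
For reference, the crash-free way to walk an RGB24 frame is row by row: `linesize[0]` is the byte stride of one row, often padded beyond 3*width, so the pixel at (x, y) starts at `data[0] + y*linesize + 3*x`. A sketch (helper name is mine; the quoted loop below instead iterates up to linesize*width, which walks into the padding and past the buffer):

```c
#include <stdint.h>

/* Sum the R, G and B channels of a packed RGB24 image, indexing per row
 * so that stride padding beyond 3*width is skipped, not read as pixels. */
static void sum_rgb24(const uint8_t *data, int linesize,
                      int width, int height,
                      long *r_sum, long *g_sum, long *b_sum)
{
    *r_sum = *g_sum = *b_sum = 0;
    for (int y = 0; y < height; y++) {
        const uint8_t *row = data + y * linesize;
        for (int x = 0; x < width; x++) {
            *r_sum += row[3*x];
            *g_sum += row[3*x + 1];
            *b_sum += row[3*x + 2];
        }
    }
}
```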

R

Navin  wrote:

>Been trying these techniques, but I keep getting crashes. Help please?:
>
>    avpicture_fill((AVPicture*) frame2, frame2_buffer, PIX_FMT_RGB24,
>                   width2, height2);
>    //printf("data = %d, %d, %d, %d\n", frame2->data[0], frame2->data[1],
>    //       frame2->data[2], frame2->data[3]);
>    //printf("linesize = %d %d %d\n", frame2->linesize[0],
>    //       frame2->linesize[1], frame2->linesize[2]);
>    //printf("width = %d\n", pCodecCtx->width);
>    //printf("height = %d\n", pCodecCtx->height);
>    //std::cin.get();
>
>    int linesize = frame2->linesize[0];
>    for (int xx = 0; xx < (linesize * width1) - 1; xx += 3)
>    {
>        int r = frame2->data[0][xx];
>        int g = frame2->data[0][xx+1];
>        int b = frame2->data[0][xx+2];
>        printf("xx=%d r=%d, g=%d, b=%d\n", xx, r, g, b);
>    }
>    printf("frame%d done", i++);
>
>    //for (int xx = 0; xx < width1; xx = xx + 3)
>    //{
>    //    for (int yy = 0; yy < height1; ++yy)
>    //    {
>    //        //int p = xx*3 + yy*frame2->linesize[0];
>    //        //int p = xx * 3 + yy * linesize;
>    //        printf("yy=%d xx=%d", yy, xx);
>    //        int p = yy * linesize + xx;
>    //        printf("p=%d\n", p);
>    //        int r = frame2->data[0][p];
>    //        int g = frame2->data[0][p+1];
>    //        int b = frame2->data[0][p+2];
>    //        printf("[r=%d, g=%d, b=%d]\n", r, g, b);
>    //    }//for
>    //}//for
>
>Nav
>
>On 11/20/2012 8:52 AM, Nav wrote:
>> Hi! Glad to be part of this mailing list.
>> What I wanted to create was a program which would receive a streaming
>> video, and when it decodes a frame of the video into either a bitmap
>> format or just pure RGB (perhaps stored in a char array), it would
>> notify another program that it has received a frame, and the other
>> program would take the RGB values and display it.
>> I've already asked this question here:
>> http://ffmpeg.zeranoe.com/forum/viewtopic.php?f=15&t=805
>> and rogerdpack told me to post my question on the libav mailing list.
>> I have been through many websites, but they either use img_convert
>> (which doesn't work) or sws_scale, which crashes when I try to use it
>> with RGB.
>> Could anyone help with a complete piece of code which can give me the
>> RGB values of a frame?
>>
>> This is a part of the YUV conversion that I tried initially.
>>
>>     i = 0;
>>     while (av_read_frame(pFormatCtx, &packet) >= 0)
>>     {
>>         // Is this a packet from the video stream?
>>         if (packet.stream_index == videoStream)
>>         {
>>             // Decode video frame
>>             avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished,
>>                                   &packet);
>>
>>             // Did we get a video frame?
>>             if (frameFinished)
>>             {
>>                 // Convert the image into the YUV format that SDL uses
>>                 sws_scale(sws_ctx, (uint8_t const * const *)pFrame->data,
>>                           pFrame->linesize, 0, pCodecCtx->height,
>>                           pict.data, pict.linesize);
