Re: [FFmpeg-user] VAAPI decoding/encoding of several video streams in parallel

2016-12-20 Thread Moritz Barsnick
On Tue, Dec 20, 2016 at 22:21:12 +0400, Anton Sviridenko wrote:
> http://ffmpeg.org/pipermail/ffmpeg-user/2016-December/034530.html
[...]
> 3) Can I use a single hwdevice and vaapi_context instance for all
> streams, or should there be a separate instance for each
> decoded/encoded stream?

You'll have more luck getting help on the appropriate mailing list
libav-user:
https://lists.ffmpeg.org/mailman/listinfo/libav-user/
which is dedicated to the use of the libav* libraries, while this list
concentrates on the command line tools.

That thread was already off-topic, and quite lucky to get a good
answer, if you ask me. ;-)

Cheers,
Moritz
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Why is length and bitrate zero?

2016-12-20 Thread Moritz Barsnick
On Tue, Dec 20, 2016 at 13:31:25 +1300, Michael Heuberger wrote:
> But regarding the length, that still doesn't look right. Can you check 
> the lengths again with your random ones?

The lengths all look correct in smplayer (I didn't check their actual
lengths), but so does the output from your example.

Gruß,
Moritz

Re: [FFmpeg-user] VAAPI decoding/encoding of several video streams in parallel

2016-12-20 Thread Mark Thompson
On 20/12/16 18:21, Anton Sviridenko wrote:
> I want to use hardware acceleration for processing multiple real-time
> video streams (coming over RTSP).
> I've read this thread -
> http://ffmpeg.org/pipermail/ffmpeg-user/2016-December/034530.html
> 
> and have some questions related to scaling all of this to several
> video streams:
> 
> 1) Is it possible at all to use VAAPI acceleration for several
> independent video streams simultaneously?

Yes, the kernel drivers deal with all of the detail here - it's exactly the 
same as using your GPU to run multiple OpenGL programs at the same time.

> 2) How should I initialize VAAPI related stuff? Do I have to create
> separate hwframe context for each stream?

Not necessarily, it depends on what you want to do.

Some things to consider:
* A hwframe pool needs to be fixed-size to use as output from a decoder or 
filter (note the render_targets argument to vaCreateContext(), to which you 
supply the surfaces member of AVVAAPIFramesContext), so it can be exhausted.  
Decoders and encoders may both hold on to frames for some length of time (to 
use as reference frames, or to wait for the stream delay), so a pool used by 
multiple of them needs to be large enough not to run out even when they sit 
on some of the surfaces for a while.
* All surfaces in a single hwframe context are the same size and format.  While 
it's perfectly valid to decode a frame onto a surface which is larger than the 
frame, it does waste memory so you may want to make the surfaces of the correct 
size when that is known.
* A filter or encoder should only be given input which matches the hwframe 
context you declared as its input when you created it.  This is primarily an 
API restriction and some other cases do work some of the time, but keeping to 
it will avoid any surprises.
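The first point can be reduced to back-of-envelope arithmetic. A minimal
sketch (the helper name and the numbers below are illustrative assumptions,
not FFmpeg API; the real requirements depend on the codec and encoder
settings):

```python
# Back-of-envelope sizing for a fixed-size surface pool shared between a
# decoder and an encoder. All names and values here are illustrative
# assumptions, not part of the FFmpeg API.

def pool_size(decoder_refs: int, encoder_delay: int, in_flight: int) -> int:
    """Surfaces the pool must hold so neither component ever starves:
    frames the decoder keeps as references, frames the encoder sits on
    (lookahead/stream delay), plus surfaces in transit between the two."""
    return decoder_refs + encoder_delay + in_flight

# e.g. an H.264 stream may use up to 16 reference frames, the encoder may
# hold a few frames of delay, and a couple of surfaces may be in flight:
print(pool_size(16, 4, 2))  # 22
```

The point is only that the pool must cover the worst-case sum, not any
particular formula.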

The easiest way to do it is probably to follow what ffmpeg itself does: make a 
single hwframe context for the output of each decoder or other operation, and 
then give that to whatever the next thing is which will consume those frames.  
This won't necessarily be sufficient in all cases - if you have something more 
complex, with output from multiple decoders being combined somehow, then you'll 
need to think about it more carefully, keeping the restrictions above in mind.

> 3) Can I use a single hwdevice and vaapi_context instance for all
> streams, or should there be a separate instance for each
> decoded/encoded stream?

Typically you will want to make one device and then use it everywhere.  
Multiple devices should also work, but note that different devices can't 
interoperate at all (so a decoder, scaler and encoder working with the same 
surfaces and hwframe context need to be using the same device, say).

You need to make exactly one struct vaapi_context for each decoder (with 
configuration appropriate to that decoder).


- Mark

[FFmpeg-user] Generate/Playback Interlaced (1080i) video with lavfi.

2016-12-20 Thread Matthias, Thomas
Hi,

I'm trying to generate some smptebars with a script, and it works fine for 
1080p video.  However, I've tried numerous combinations for generating 1080i 
video using the -vf tinterlace filter.  For example, if I want to generate 
1080i50 video, do I use 25000/1000 as the framerate, and then -vf tinterlace=6? 
This doesn't seem to work.  I've also tried setting the framerate to 
5/1000 and then using -vf tinterlace=4, but that seems to be giving 1080p25 
instead of true 1080i50?  I'm probably way off here; any help would be really 
appreciated.

Example command (not at all inclusive of everything I've tried):

ffmpeg -y -f lavfi -i "smptebars=duration=1:size=1920x1080:rate=25000/1000" 
-pix_fmt uyvy422 -vcodec rawvideo -vf tinterlace=6 smpte_bar_test.mov

More importantly, using the decklink output device, is there a way to play 
back an interlaced video?  Even if I try to play back a video that I know is 
1080i, it always plays out as progressive from the decklink output device.

Thanks!

Re: [FFmpeg-user] VP9 to HEVC hdr demo conversion?

2016-12-20 Thread traycold
Carl Eugen Hoyos-2 wrote
> So what is missing from the FFmpeg mkv muxer?

I can't answer your question.
I just wanted to share my experience: using the command mentioned in my
first mail in this thread, I was not able to correctly encode the HDR video
in question, while using the command mentioned in my last mail (as suggested
by Andy Furniss) the encoding was fine.




Re: [FFmpeg-user] cutoff argument not passed to libmp3lame

2016-12-20 Thread Bernhard Döbler


Am 19.12.2016 um 01:56 schrieb Carl Eugen Hoyos:

2016-12-18 23:58 GMT+01:00 Bernhard Döbler :


I looked into ffmpeg documentation and saw there's an argument "-cutoff"
that should do, what "-lowpass" does in LAME.

Why do you think so / where does it say so in the documentation?


LAME documentation says:
--lowpass Frequency(kHz), lowpass filter cutoff above freq.

http://lame.cvs.sourceforge.net/viewvc/lame/lame/USAGE

The FFmpeg documentation says:

cutoff

   Set cutoff bandwidth in Hz.

https://ffmpeg.org/ffmpeg-codecs.html

The wording differs slightly for each codec in ffmpeg, but the option seems 
identical in principle.
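If the two options really are equivalent, the only difference is units: the
LAME doc excerpt above gives --lowpass in kHz, while ffmpeg's cutoff is in Hz.
A trivial sketch of that assumed mapping (the function name is mine):

```python
def lame_lowpass_to_ffmpeg_cutoff(lowpass_khz: float) -> int:
    """Convert LAME's --lowpass value (kHz) to ffmpeg's cutoff (Hz),
    assuming the two options are otherwise equivalent."""
    return int(lowpass_khz * 1000)

# lame --lowpass 18 would then correspond to ffmpeg ... -cutoff 18000
print(lame_lowpass_to_ffmpeg_cutoff(18))  # 18000
```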



Bernhard


[FFmpeg-user] Fwd: [Libav-user] MP4 concatenation with MP42 major_brand

2016-12-20 Thread black copper
-- Forwarded message --
From: Nicolas George 
Date: Tue, Dec 20, 2016 at 2:12 PM
Subject: Re: [Libav-user] MP4 concatenation with MP42 major_brand
To: libav-u...@ffmpeg.org, ffmpeg-user@ffmpeg.org


On decadi 30 frimaire, an CCXXV, black copper wrote:
> I have MP4 source videos, that contain:
>
> video: H264
> audio: G.711 (mulaw)
>
> I want to use concatenation on source videos to get one MP4 file.
>
> I used concat filter by first creating a list: mylist.txt file as follows:
>
> file 'v1.mp4'
> file 'v2.mp4'
> file 'v3.mp4'
>
> then invoked the ffmpeg command like this:
>
> ffmpeg -f concat -i mylist.txt -c copy op.mp4 -y
>
> This worked only with files that are mp41 compatible.
>
> With mp42 files that I mentioned above, I get this error:
>
> [mp4 @ 0x24ca780] Could not find tag for codec pcm_mulaw in stream #1,
> codec not currently supported in container
> Could not write header for output file #0 (incorrect codec parameters ?):
> Invalid argument

This message is about output, not input. Please test remuxing a single
file, without concat, using the first file in each sequence:

ffmpeg -i input.mp4 -c copy output.mp4


>> yes, it's giving the same error...

If, as I suspect, it fails the same way, your problem is exactly what is
written in the second line of the error message.

Also, note that you posted on the wrong mailing list. I am adding
ffmpeg-user as a recipient; please reply only there.

>> doing that, thanks,

Regards,

--
  Nicolas George

> Full command line:
>
> 
> ffmpeg -f concat -i mylist.txt -c copy op.mp4 -y
> ffmpeg version N-82760-g55affd9 Copyright (c) 2000-2016 the FFmpeg
> developers
>   built with gcc 4.8 (Ubuntu 4.8.4-2ubuntu1~14.04.3)
>   configuration: --pkg-config-flags=--static
> --extra-cflags=-I/home/ubuntu/ffmpeg_build/include
> --extra-ldflags=-L/home/ubuntu/ffmpeg_build/lib --enable-gpl
> --enable-libass --enable-libfdk-aac --enable-libfreetype
> --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis
> --enable-libvpx --enable-libx264 --enable-libx265 --enable-nonfree
>   libavutil  55. 41.101 / 55. 41.101
>   libavcodec 57. 66.109 / 57. 66.109
>   libavformat57. 58.101 / 57. 58.101
>   libavdevice57.  2.100 / 57.  2.100
>   libavfilter 6. 68.100 /  6. 68.100
>   libswscale  4.  3.101 /  4.  3.101
>   libswresample   2.  4.100 /  2.  4.100
>   libpostproc54.  2.100 / 54.  2.100
> [mov,mp4,m4a,3gp,3g2,mj2 @ 0x23ed7c0] Auto-inserting h264_mp4toannexb
> bitstream filter
> Guessed Channel Layout for Input Stream #0.0 : mono
> Input #0, concat, from 'mylist.txt':
>   Duration: N/A, start: 0.00, bitrate: 518 kb/s
> Stream #0:0(eng): Audio: pcm_mulaw (ulaw / 0x77616C75), 8000 Hz, mono,
> s16, 64 kb/s
> Metadata:
>   creation_time   : 2016-12-20T03:41:14.00Z
>   handler_name: ?Apple Sound Media Handler
> Stream #0:1(eng): Video: h264 (High) (avc1 / 0x31637661), yuv420p,
> 1920x1080, 454 kb/s, 27.70 fps, 27.70 tbr, 90k tbn, 180k tbc
> Metadata:
>   creation_time   : 2016-12-20T03:41:14.00Z
>   handler_name: ?Apple Video Media Handler
>   encoder : H.264
> [mp4 @ 0x2413780] Could not find tag for codec pcm_mulaw in stream #1,
> codec not currently supported in container
> Could not write header for output file #0 (incorrect codec parameters ?):
> Invalid argument
> Stream mapping:
>   Stream #0:1 -> #0:0 (copy)
>   Stream #0:0 -> #0:1 (copy)
> Last message repeated 1 times
> 
>
> I'm on Ubuntu 14.04 OS.
>
> Is there any way I can get this functionality to work with mp42-compatible
> files? Any help is much appreciated.
>
> Thanks,

> ___
> Libav-user mailing list
> libav-u...@ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/libav-user





[FFmpeg-user] VAAPI decoding/encoding of several video streams in parallel

2016-12-20 Thread Anton Sviridenko
I want to use hardware acceleration for processing multiple real-time
video streams (coming over RTSP).
I've read this thread -
http://ffmpeg.org/pipermail/ffmpeg-user/2016-December/034530.html

and have some questions related to scaling all of this to several
video streams:

1) Is it possible at all to use VAAPI acceleration for several
independent video streams simultaneously?

2) How should I initialize VAAPI related stuff? Do I have to create
separate hwframe context for each stream?

3) Can I use a single hwdevice and vaapi_context instance for all
streams, or should there be a separate instance for each
decoded/encoded stream?

Anton

Re: [FFmpeg-user] aevalsrc question

2016-12-20 Thread Muhammad Faiz
On 12/20/16, Adam Puckett  wrote:
> On 12/20/16, Muhammad Faiz  wrote:
>> On 12/20/16, Adam Puckett  wrote:
>>> On 12/19/16, Nicolas George  wrote:
 Oh, good catch. I should have remembered this task needed a primitive
 function, not just a multiplication.

 Regards,

 --
   Nicolas George

>>> What do I need to do to make the formula right?
>>
>> Just do the reverse.
>> Given freq(t) = 262 * 2^(t/10)
>> w(t) = 2*PI * 262 * 2^(t/10)
>> ph(t) = integral of w(t) dt
>>   = 2*PI * 262 * 10/log(2) * 2^(t/10) + arbitrary constant
>>
>> Thx.
> Thanks, that worked! But the question is: why? I don't quite
> understand why I had to put in the log(2) expression.
>
> On a related note, I've looked at a formula that does  linear
> interpolation (one of the example scripts for Praat
> (http://praat.org/)), and there is a division by 2 in the script; is
> this for a similar reason? (For arbitrary targeted frequencies, I'm
> assuming I would have to use a log(highestfreq/lowestfreq) in place of
> the log(2)?)
>
> Thanks

It's calculus:
2^(t/10) = exp(t/10 * log(2))
and the integral of exp(a * t) dt = 1/a * exp(a * t) + an arbitrary constant.
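The step can also be checked numerically: the derivative of ph(t) from the
earlier reply should give back the instantaneous angular frequency
2*PI*freq(t). A quick sketch:

```python
import math

def freq(t):
    # the sweep from the earlier reply: 262 Hz doubling every 10 seconds
    return 262 * 2 ** (t / 10)

def ph(t):
    # primitive of 2*pi*freq(t): 2*pi * 262 * 10/log(2) * 2^(t/10)
    return 2 * math.pi * 262 * 10 / math.log(2) * 2 ** (t / 10)

# central-difference derivative of ph at t=3 vs. 2*pi*freq(3);
# they should agree, confirming the 10/log(2) factor
h = 1e-6
t = 3.0
dph = (ph(t + h) - ph(t - h)) / (2 * h)
print(abs(dph - 2 * math.pi * freq(t)) < 1e-3)  # True
```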

thx

Re: [FFmpeg-user] aevalsrc question

2016-12-20 Thread Adam Puckett
On 12/20/16, Muhammad Faiz  wrote:
> On 12/20/16, Adam Puckett  wrote:
>> On 12/19/16, Nicolas George  wrote:
>>> Oh, good catch. I should have remembered this task needed a primitive
>>> function, not just a multiplication.
>>>
>>> Regards,
>>>
>>> --
>>>   Nicolas George
>>>
>> What do I need to do to make the formula right?
>
> Just do the reverse.
> Given freq(t) = 262 * 2^(t/10)
> w(t) = 2*PI * 262 * 2^(t/10)
> ph(t) = integral of w(t) dt
>   = 2*PI * 262 * 10/log(2) * 2^(t/10) + arbitrary constant
>
> Thx.
Thanks, that worked! But the question is: why? I don't quite
understand why I had to put in the log(2) expression.

On a related note, I've looked at a formula that does  linear
interpolation (one of the example scripts for Praat
(http://praat.org/)), and there is a division by 2 in the script; is
this for a similar reason? (For arbitrary targeted frequencies, I'm
assuming I would have to use a log(highestfreq/lowestfreq) in place of
the log(2)?)

Thanks

Re: [FFmpeg-user] High memory usage when merging videos

2016-12-20 Thread Muhammad Faiz
On 12/20/16, Muhammad Faiz  wrote:
> On 12/20/16, Jonathan Girven  wrote:
>>> How if you move your multiple inputs into movie sources inside
>>> filter_complex?
>>
>> Sorry, I looked at the documentation here:
>>
>> https://ffmpeg.org/ffmpeg-filters.html#Video-Sources
>>
>> but couldn't see how to grab a file with this. Do you have an example?
>
> At this section: https://ffmpeg.org/ffmpeg-filters.html#Multimedia-Sources
>
> for example, from your filter_complex:
> "movie=movie0.mp4:s=dv+da [movie0_v][movie0_a];
> [movie0_v] setpts=PTS-STARTPTS[v0_trim0];
> [movie0_a] asetpts=PTS-STARTPTS[a0_trim0]; ..."
>
> Thanks
>

For a safer approach, you may want to put video and audio in separate movie
sources, e.g.:
"movie=movie0.mp4, setpts=PTS-STARTPTS [v0_trim0];
amovie=movie0.mp4, asetpts=PTS-STARTPTS [a0_trim0]; ... "
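This per-input pattern can be generated mechanically for any number of
files. A sketch that builds such a filtergraph string (the short [vN]/[aN]
labels and the final concat step are my additions, not from the mail):

```python
def build_filtergraph(files):
    """One movie/amovie source pair per input, each reset to start at
    PTS 0, then all pairs joined with the concat filter."""
    parts, labels = [], []
    for i, f in enumerate(files):
        parts.append(f"movie={f}, setpts=PTS-STARTPTS [v{i}]")
        parts.append(f"amovie={f}, asetpts=PTS-STARTPTS [a{i}]")
        labels.append(f"[v{i}][a{i}]")
    # join all labelled pairs into one video and one audio stream
    parts.append(f"{''.join(labels)} concat=n={len(files)}:v=1:a=1 [v][a]")
    return "; ".join(parts)

print(build_filtergraph(["movie0.mp4", "movie1.mp4"]))
```

The resulting string would be passed to ffmpeg via -filter_complex, mapping
the final [v] and [a] outputs.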

Thx

Re: [FFmpeg-user] High memory usage when merging videos

2016-12-20 Thread Muhammad Faiz
On 12/20/16, Jonathan Girven  wrote:
>> How if you move your multiple inputs into movie sources inside
>> filter_complex?
>
> Sorry, I looked at the documentation here:
>
> https://ffmpeg.org/ffmpeg-filters.html#Video-Sources
>
> but couldn't see how to grab a file with this. Do you have an example?

At this section: https://ffmpeg.org/ffmpeg-filters.html#Multimedia-Sources

for example, from your filter_complex:
"movie=movie0.mp4:s=dv+da [movie0_v][movie0_a];
[movie0_v] setpts=PTS-STARTPTS[v0_trim0];
[movie0_a] asetpts=PTS-STARTPTS[a0_trim0]; ..."

Thanks

Re: [FFmpeg-user] High memory usage when merging videos

2016-12-20 Thread Jonathan Girven
> How if you move your multiple inputs into movie sources inside filter_complex?

Sorry, I looked at the documentation here:

https://ffmpeg.org/ffmpeg-filters.html#Video-Sources

but couldn't see how to grab a file with this. Do you have an example?

Re: [FFmpeg-user] High memory usage when merging videos

2016-12-20 Thread Jonathan Girven
> The changes I am working on would fix the "bufferqueue overflow,
> dropping" problems that occur with slightly unbalanced streams, for
> example audio streams with tiny frames where you can get a hundred or so
> frames in one go.

Thanks for clarifying that Nicolas.

> But it would not fix the really unbalanced cases, because nothing can.

Are you thinking that the processing task I am trying to perform is
simply too much for the device I am using? That might well be the
answer.

Naively I thought that, assuming the device could hold some number of
dependent frames in memory, a less powerful device would only take
more time to process a video, not trigger the OS to kill it. Obviously
it is much more complicated than that, but a Moto G still has well
under 1 GB of RAM. FFmpeg using all the memory available to it isn't
necessarily a bad thing. My problem is Android interpreting that as a
reason to kill the task.

I have tried many similar commands to narrow down what is happening.
Just re-encoding a 2 minute video on the device doesn't get killed by
the OS, e.g:

ffmpeg -i input.mp4 -vcodec libx264 -preset faster output.mp4

That holds even though it is significantly more video and takes more
time than the previously mentioned command. The same goes for trimming
with "-ss" and "-t". Therefore my complex filter usage must be related
to the OS killing the task. For both commands, FFmpeg reports a speed
of ~0.2x, so the processing time must be limited by the decoding or
encoding rate.

I am still working on trying to get something more specific than that,
but it is difficult because running the same command twice does not
consistently get killed or succeed. I suppose the first iteration
might succeed but leave resources allocated (in my app or FFmpeg), so
the second iteration is killed. For reference, the memory monitor of
my app hovers around 40-60MB (but does not show FFmpeg's memory
usage).

Always open to new suggestions! Or perhaps hiring an FFmpeg developer
is the way forward.
Thanks,
Jon.

Re: [FFmpeg-user] MP4 concatenation with MP42 major_brand

2016-12-20 Thread Nicolas George
On decadi 30 frimaire, an CCXXV, black copper wrote:
> I have MP4 source videos, that contain:
> 
> video: H264
> audio: G.711 (mulaw)
> 
> I want to use concatenation on source videos to get one MP4 file.
> 
> I used concat filter by first creating a list: mylist.txt file as follows:
> 
> file 'v1.mp4'
> file 'v2.mp4'
> file 'v3.mp4'
> 
> then invoked the ffmpeg command like this:
> 
> ffmpeg -f concat -i mylist.txt -c copy op.mp4 -y
> 
> This worked only with files that are mp41 compatible.
> 
> With mp42 files that I mentioned above, I get this error:
> 
> [mp4 @ 0x24ca780] Could not find tag for codec pcm_mulaw in stream #1,
> codec not currently supported in container
> Could not write header for output file #0 (incorrect codec parameters ?):
> Invalid argument

This message is about output, not input. Please test remuxing a single
file, without concat, using the first file in each sequence:

ffmpeg -i input.mp4 -c copy output.mp4

If, as I suspect, it fails the same way, your problem is exactly what is
written in the second line of the error message.

Also, note that you posted on the wrong mailing list. I am adding
ffmpeg-user as a recipient; please reply only there.

Regards,

-- 
  Nicolas George

> Full command line:
> 
> 
> ffmpeg -f concat -i mylist.txt -c copy op.mp4 -y
> ffmpeg version N-82760-g55affd9 Copyright (c) 2000-2016 the FFmpeg
> developers
>   built with gcc 4.8 (Ubuntu 4.8.4-2ubuntu1~14.04.3)
>   configuration: --pkg-config-flags=--static
> --extra-cflags=-I/home/ubuntu/ffmpeg_build/include
> --extra-ldflags=-L/home/ubuntu/ffmpeg_build/lib --enable-gpl
> --enable-libass --enable-libfdk-aac --enable-libfreetype
> --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis
> --enable-libvpx --enable-libx264 --enable-libx265 --enable-nonfree
>   libavutil  55. 41.101 / 55. 41.101
>   libavcodec 57. 66.109 / 57. 66.109
>   libavformat57. 58.101 / 57. 58.101
>   libavdevice57.  2.100 / 57.  2.100
>   libavfilter 6. 68.100 /  6. 68.100
>   libswscale  4.  3.101 /  4.  3.101
>   libswresample   2.  4.100 /  2.  4.100
>   libpostproc54.  2.100 / 54.  2.100
> [mov,mp4,m4a,3gp,3g2,mj2 @ 0x23ed7c0] Auto-inserting h264_mp4toannexb
> bitstream filter
> Guessed Channel Layout for Input Stream #0.0 : mono
> Input #0, concat, from 'mylist.txt':
>   Duration: N/A, start: 0.00, bitrate: 518 kb/s
> Stream #0:0(eng): Audio: pcm_mulaw (ulaw / 0x77616C75), 8000 Hz, mono,
> s16, 64 kb/s
> Metadata:
>   creation_time   : 2016-12-20T03:41:14.00Z
>   handler_name: ?Apple Sound Media Handler
> Stream #0:1(eng): Video: h264 (High) (avc1 / 0x31637661), yuv420p,
> 1920x1080, 454 kb/s, 27.70 fps, 27.70 tbr, 90k tbn, 180k tbc
> Metadata:
>   creation_time   : 2016-12-20T03:41:14.00Z
>   handler_name: ?Apple Video Media Handler
>   encoder : H.264
> [mp4 @ 0x2413780] Could not find tag for codec pcm_mulaw in stream #1,
> codec not currently supported in container
> Could not write header for output file #0 (incorrect codec parameters ?):
> Invalid argument
> Stream mapping:
>   Stream #0:1 -> #0:0 (copy)
>   Stream #0:0 -> #0:1 (copy)
> Last message repeated 1 times
> 
> 
> I'm on Ubuntu 14.04 OS.
> 
> Is there any way I can get this functionality to work with mp42-compatible
> files? Any help is much appreciated.
> 
> Thanks,



