On Fri, 9 Dec 2016, Michael Niedermayer wrote:

> On Thu, Dec 08, 2016 at 09:47:53PM +0100, Nicolas George wrote:
>> On octidi, 18 Frimaire, year CCXXV, Michael Niedermayer wrote:
>>> A. Is a heap limit for av_*alloc*() acceptable?
>>> B. Are case-based limits acceptable?
>>
>> No. This is the task of the operating system.


>>> Also, even if C is chosen, a small set of limits on the main parameters
>>> is still needed to allow efficient fuzzing; all of the issues reported
>>> by oss-fuzz recently are "hangs" due to slow decoding.
>>
>> Then set a limit at the operating system level.

> I think you are misunderstanding the problem.
>
> The goal of a fuzzer is to find bugs: crashes, undefined behavior, OOM,
> hangs.
>
> If the code under test can allocate arbitrary amounts of memory and
> take arbitrary amounts of time in a significant number of non-bug
> cases, then the fuzzer cannot reliably find the corresponding bugs.
>
> Moving around the threshold at which something is declared an OOM or a
> hang will not solve this.
> Blocking high resolution, high channel count and high stream count
> cases, OTOH, should reduce the rate of false positives.

Then you should run the fuzzer with the limits you find optimal, no?
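
Such caps would live in the fuzzing harness rather than in the library
defaults. Purely as an illustration, here is a minimal C sketch of the
kind of pre-check described above; the cap values and the helper name
are assumptions made up for this example, not anything proposed in this
thread or present in FFmpeg:

/* Illustrative caps only; not values proposed in this thread. */
#define CAP_MAX_WIDTH    16384
#define CAP_MAX_HEIGHT   16384
#define CAP_MAX_CHANNELS 64
#define CAP_MAX_STREAMS  1000

/* Hypothetical harness-side pre-check: inputs declaring extreme
 * parameters are skipped, so a slow-but-correct decode of a huge frame
 * is not reported as a hang or OOM by the fuzzer. */
static int parameters_within_caps(int width, int height,
                                  int channels, int nb_streams)
{
    if (width  <= 0 || width  > CAP_MAX_WIDTH)           return 0;
    if (height <= 0 || height > CAP_MAX_HEIGHT)          return 0;
    if (channels   < 0 || channels   > CAP_MAX_CHANNELS) return 0;
    if (nb_streams < 0 || nb_streams > CAP_MAX_STREAMS)  return 0;
    return 1;
}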

Reducing the defaults, which causes rare but existing files to stop working, does not make much sense to me. I particularly don't like the stream count limits, because parsing possibly corrupted sources (e.g. mpegts) can easily generate a large number of mostly harmless empty streams, as far as I remember.
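
For stream counts specifically, an application that does care can
already enforce its own policy on the caller side with the existing
libavformat API. A hedged sketch (open_with_stream_limit and the idea
of passing the limit as a parameter are made up for this example):

#include <libavformat/avformat.h>
#include <libavutil/error.h>

/* Open an input and refuse to continue if the demuxer exposed more
 * streams than this particular application is willing to handle.
 * The limit is the application's own policy, not a libavformat
 * default. */
static int open_with_stream_limit(AVFormatContext **fmt, const char *url,
                                  unsigned max_streams)
{
    int ret = avformat_open_input(fmt, url, NULL, NULL);
    if (ret < 0)
        return ret;

    ret = avformat_find_stream_info(*fmt, NULL);
    if (ret < 0 || (*fmt)->nb_streams > max_streams) {
        avformat_close_input(fmt);
        return ret < 0 ? ret : AVERROR(EINVAL);
    }
    return 0;
}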

It might just make more sense to create a section in the documentation or a wiki page explaining that if you work with untrusted files you should use a sandbox and system resource limits, and that you can use these options as well.
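
For the system resource limits part, a minimal sketch of what that
could look like on a POSIX system follows; the 2 GiB and 60 second
values are arbitrary examples, and a real deployment would pick its own
numbers (and a sandbox) to match its workload:

#include <sys/resource.h>

/* Cap the address space and CPU time of the current process before it
 * touches untrusted input: the kernel then turns a runaway allocation
 * into a failed malloc() and a runaway decode into SIGXCPU, regardless
 * of any library defaults. Values are examples only. */
static int apply_resource_limits(void)
{
    struct rlimit mem = { 2UL << 30, 2UL << 30 };  /* 2 GiB address space */
    struct rlimit cpu = { 60, 60 };                /* 60 seconds of CPU   */

    if (setrlimit(RLIMIT_AS,  &mem) < 0)
        return -1;
    if (setrlimit(RLIMIT_CPU, &cpu) < 0)
        return -1;
    return 0;
}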

If we still want a default limit, then IMHO that limit should be _insanely_ high: tens of thousands, not hundreds.

Regards,
Marton
