Re: [FFmpeg-devel] [PATCH 2/2] qt-faststart - optimize the offset change loop
> -----Original Message-----
> From: ffmpeg-devel [mailto:ffmpeg-devel-boun...@ffmpeg.org] On Behalf Of Michael Niedermayer
> Sent: Wednesday, May 30, 2018 12:37 AM
> To: FFmpeg development discussions and patches
> Subject: Re: [FFmpeg-devel] [PATCH 2/2] qt-faststart - optimize the offset change loop
>
> > +*ptr++ = (current_offset >> 56) & 0xFF;
> > +*ptr++ = (current_offset >> 48) & 0xFF;
> > +*ptr++ = (current_offset >> 40) & 0xFF;
> > +*ptr++ = (current_offset >> 32) & 0xFF;
> > +*ptr++ = (current_offset >> 24) & 0xFF;
> > +*ptr++ = (current_offset >> 16) & 0xFF;
> > +*ptr++ = (current_offset >> 8) & 0xFF;
> > +*ptr++ = (current_offset >> 0) & 0xFF;
>
> can this be simplified with libavcodec/bytestream.h, libavutil/intreadwrite.h or similar ?
>
> [...]

Yes, I can change it to AV_WB32/AV_WB64, but at the moment this utility is completely stand-alone - it does not depend on anything from ffmpeg, so maybe it's better to keep it this way.

Thanks

Eran

> --
> Michael     GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB
>
> Modern terrorism, a quick summary: Need oil, start war with country that has
> oil, kill hundred thousand in war. Let country fall into chaos, be surprised
> about rise of fundamentalists. Drop more bombs, kill more people, be
> surprised about them taking revenge and drop even more bombs and strip your
> own citizens of their rights and freedoms. to be continued

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
Re: [FFmpeg-devel] [PATCH] doc/ffmpeg - rewrite Stream Selection chapter
On 30-05-2018 04:57 AM, Carl Eugen Hoyos wrote:
> 2018-05-27 6:16 GMT+02:00, Gyan Doshi:
>> v2 attached.
>
>> +In the absence of any map options for a particular output file, ffmpeg inspects the output
>> +format to check which type of streams can be included in it, viz. video, audio and/or
>
> Sorry, what is "viz."?

"Namely". Commonly seen in English prose. Can change it to 'i.e.' which is less correct here.

>> +subtitles. For each acceptable stream type, ffmpeg will pick one stream, when available,
>> +from among all the inputs.
>
> I don't think this is correct, not every stream type is picked.
> Or do I misunderstand?

Yes. The qualifier is at the start, "For each acceptable stream type".

>> +It will select that stream based upon the following criteria:
>> +@*
>> +@*for video, it is the stream with the highest resolution,
>> +@*for audio, it is the stream with the most channels,
>> +@*for subtitles, it is the first subtitle stream
>
> Please remove the actual current criteria: This is just the current
> state of the implementation, for one of the above, this is obviously
> not a good choice, for the others, we could find better criteria.
> Or mention that they may all change at any point.

These have been the criteria for nearly 7 years now. The narrowing of the subtitle selection was added by you nearly 4 years ago. This is one of the parts I copied from the current version, since it remains valid.

>> +The output format's default subtitle encoder may be text-based or image-based, and only a
>> +subtitle stream of the same type can be chosen.
>
> I wish that were true but I fear it isn't ;-(

Please test. The 2nd example demonstrates it. It's your logic - you authored & committed it. (dvb teletext is an exception since it has no prop flags set.)

>> +In the case where several streams of the same type rate equally, the stream with the lowest
>> +index is chosen.
>
> Please remove this.

Why? Another part, copied from the original. Remains valid.

> All-in-all, this is far too complicated imo.

The _implementation_ is complicated. The docs now reflect it.

The basic principle I'm aiming to follow for docs, even if execution remains uneven, is: if a user consults the relevant parts of the documentation before execution, they should be able to predict how the program will behave. If they do it afterwards, they should understand what the program did.

Even though FFmpeg is an open source project, end users of the CLI tools aren't expected to understand or dive into the source to grasp how the program behaves. It's the job of the docs to convey descriptions of behaviour that will affect what the end user expects the program to do.

Do you disagree?

Regards,
Gyan
Re: [FFmpeg-devel] [PATCH 2/2] flvenc: Fix sequence header update timestamps
On Sun, May 13, 2018 at 3:24 AM, Michael Niedermayer wrote:
> On Thu, May 10, 2018 at 06:40:08PM -0700, Alex Converse wrote:
>> From: Alex Converse
>>
>> ---
>>  libavformat/flvenc.c | 9 -
>>  1 file changed, 4 insertions(+), 5 deletions(-)
>>
>> diff --git a/libavformat/flvenc.c b/libavformat/flvenc.c
>> index 9b7cdfe7db..7aa2dbf9a6 100644
>> --- a/libavformat/flvenc.c
>> +++ b/libavformat/flvenc.c
>> @@ -485,7 +485,7 @@ static int unsupported_codec(AVFormatContext *s,
>>      return AVERROR(ENOSYS);
>>  }
>>
>> -static void flv_write_codec_header(AVFormatContext* s, AVCodecParameters* par) {
>> +static void flv_write_codec_header(AVFormatContext* s, AVCodecParameters* par, unsigned ts) {
>
> It seems jeeb prefers int64_t here instead of unsigned.
> Can you change that before pushing ?

Pushed with requested changes.
Re: [FFmpeg-devel] [GSOC] [PATCH] DNN module introduction and SRCNN filter update
For the case that ffmpeg is built with TENSORFLOW_BACKEND enabled, while there is no TF at runtime, ff_get_dnn_module always returns a valid pointer, and there is no chance for the filter to fall back to native mode. Looks like we need a runtime check in function ff_get_dnn_module.

static av_cold int init(AVFilterContext* context)
{
    SRCNNContext* srcnn_context = context->priv;
#ifdef TENSORFLOW_BACKEND
    srcnn_context->dnn_module = ff_get_dnn_module(DNN_TF);
    if (!srcnn_context->dnn_module){
        av_log(context, AV_LOG_INFO, "could not load tensorflow backend, using native backend instead\n");
        srcnn_context->dnn_module = ff_get_dnn_module(DNN_NATIVE);
    }
#else
    srcnn_context->dnn_module = ff_get_dnn_module(DNN_NATIVE);
#endif

DNNModule* ff_get_dnn_module(DNNBackendType backend_type)
{
    ...
    case DNN_TF:
#ifdef TENSORFLOW_BACKEND
        // add a runtime check here, possible?
        dnn_module->load_model = &dnn_load_model_tf;
        dnn_module->load_default_model = &dnn_load_default_model_tf;
        dnn_module->execute_model = &dnn_execute_model_tf;
        dnn_module->free_model = &dnn_free_model_tf;
#else
        av_freep(&dnn_module);
        return NULL;
#endif

-----Original Message-----
From: ffmpeg-devel [mailto:ffmpeg-devel-boun...@ffmpeg.org] On Behalf Of Pedro Arthur
Sent: Tuesday, May 29, 2018 8:45 PM
To: Sergey Lavrushkin
Cc: FFmpeg development discussions and patches
Subject: Re: [FFmpeg-devel] [GSOC] [PATCH] DNN module introduction and SRCNN filter update

2018-05-29 5:14 GMT-03:00 Sergey Lavrushkin:
> 2018-05-29 4:08 GMT+03:00 Pedro Arthur:
>> 2018-05-28 19:52 GMT-03:00 Sergey Lavrushkin:
>>> 2018-05-28 9:32 GMT+03:00 Guo, Yejun:
>>>
>>>> looks that no tensorflow dependency is introduced, a new model format
>>>> is created together with some CPU implementation for inference. With
>>>> this idea, Android Neural Network would be a very good reference, see
>>>> https://developer.android.google.cn/ndk/guides/neuralnetworks/. It
>>>> defines how the model is organized, and also provides a CPU optimized
>>>> inference implementation (within the NNAPI runtime, it is open source).
>>>> It is still under development but mature enough to run some popular
>>>> dnn models with proper performance. We can absorb some basic design.
>>>> Anyway, just a reference fyi. (btw, I'm not sure about any IP issue)
>>>
>>> The idea was to first introduce something to use when tensorflow is
>>> not available. Here is another patch, that introduces the tensorflow
>>> backend.
>>
>> I think it would be better for reviewing if you send the second patch
>> in a new email.
>
> Then we need to push the first patch, I think.

Not necessarily, 'git send-email' may give you a glimpse of how it is done.

>>>> For this patch, I have two comments.
>>>>
>>>> 1. change from "DNNModel* (*load_default_model)(DNNDefaultModel model_type);"
>>>> to "DNNModel* (*load_builtin_model)(DNNBuiltinModel model_type);"
>>>> The DNNModule can be invoked by many filters, default model is a
>>>> good name at the filter level, while built-in model is better
>>>> within the DNN scope.
>>>>
>>>> typedef struct DNNModule{
>>>>     // Loads model and parameters from given file. Returns NULL if it is not possible.
>>>>     DNNModel* (*load_model)(const char* model_filename);
>>>>     // Loads one of the default models
>>>>     DNNModel* (*load_default_model)(DNNDefaultModel model_type);
>>>>     // Executes model with specified input and output. Returns DNN_ERROR otherwise.
>>>>     DNNReturnType (*execute_model)(const DNNModel* model);
>>>>     // Frees memory allocated for model.
>>>>     void (*free_model)(DNNModel** model);
>>>> } DNNModule;
>>>>
>>>> 2. add a new variable 'number' for DNNData/InputParams. As a typical
>>>> DNN concept, the data shape usually is <batch, height, width, channel>
>>>> or <height, width, channel>, the last component denotes its index
>>>> changes the fastest in the memory. We can add this concept into the
>>>> API, and decide to support <batch, height, width, channel> or
>>>> <height, width, channel> or both.
>>>
>>> I did not add number of elements in batch because I thought, that we
>>> would not feed more than one element at once to a network in a ffmpeg
>>> filter. But it can be easily added if necessary.
>>>
>>> So here is the patch that adds tensorflow backend with the previous patch.
>>> I forgot to change include guards from AVUTIL_* to AVFILTER_* in it.
>>
>> You moved the files from libavutil to libavfilter while it was
>> proposed to move them to libavformat.
>
> Not only, it was also proposed to move it to libavfilter if it is
> going to be used only in filters. I do not know if this module is
> useful anywhere else besides libavfilter.
Re: [FFmpeg-devel] [PATCH] avformat/flvenc: Avoid truncating timestamp before avio_write_marker()
On Sun, May 13, 2018 at 3:41 AM, Michael Niedermayer wrote:
> Signed-off-by: Michael Niedermayer
> ---
>  libavformat/flvenc.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/libavformat/flvenc.c b/libavformat/flvenc.c
> index e8af48cb64..168ff9ffb8 100644
> --- a/libavformat/flvenc.c
> +++ b/libavformat/flvenc.c
> @@ -873,7 +873,7 @@ static int flv_write_packet(AVFormatContext *s, AVPacket *pkt)
>      AVCodecParameters *par = s->streams[pkt->stream_index]->codecpar;
>      FLVContext *flv = s->priv_data;
>      FLVStreamContext *sc = s->streams[pkt->stream_index]->priv_data;
> -    unsigned ts;
> +    int64_t ts;
>      int size = pkt->size;
>      uint8_t *data = NULL;
>      int flags = -1, flags_size, ret;
> --
> 2.17.0

The put_avc_eos_tag() function signature applies a similar truncation. Best to be consistent.
[FFmpeg-devel] [PATCH v3 1/3] lavc,doc: add avs2 codec
Signed-off-by: hwren
---
 doc/APIchanges          | 3 +++
 libavcodec/avcodec.h    | 1 +
 libavcodec/codec_desc.c | 7 +++++++
 libavcodec/version.h    | 4 ++--
 4 files changed, 13 insertions(+), 2 deletions(-)

diff --git a/doc/APIchanges b/doc/APIchanges
index efe15ba..3d08bb9 100644
--- a/doc/APIchanges
+++ b/doc/APIchanges
@@ -15,6 +15,9 @@ libavutil:     2017-10-21

 API changes, most recent first:

+2018-05-xx - xx - lavc 58.20.100 - avcodec.h
+  Add AV_CODEC_ID_AVS2.
+
 2018-05-xx - xx - lavf 58.15.100 - avformat.h
   Add pmt_version field to AVProgram

diff --git a/libavcodec/avcodec.h b/libavcodec/avcodec.h
index fb0c6fa..ce5f307 100644
--- a/libavcodec/avcodec.h
+++ b/libavcodec/avcodec.h
@@ -409,6 +409,7 @@ enum AVCodecID {
     AV_CODEC_ID_DXV,
     AV_CODEC_ID_SCREENPRESSO,
     AV_CODEC_ID_RSCC,
+    AV_CODEC_ID_AVS2,

     AV_CODEC_ID_Y41P = 0x8000,
     AV_CODEC_ID_AVRP,
diff --git a/libavcodec/codec_desc.c b/libavcodec/codec_desc.c
index 79552a9..e85492e 100644
--- a/libavcodec/codec_desc.c
+++ b/libavcodec/codec_desc.c
@@ -1395,6 +1395,13 @@ static const AVCodecDescriptor codec_descriptors[] = {
         .props     = AV_CODEC_PROP_LOSSLESS,
     },
     {
+        .id        = AV_CODEC_ID_AVS2,
+        .type      = AVMEDIA_TYPE_VIDEO,
+        .name      = "avs2",
+        .long_name = NULL_IF_CONFIG_SMALL("AVS2/IEEE 1857.4"),
+        .props     = AV_CODEC_PROP_LOSSY,
+    },
+    {
         .id        = AV_CODEC_ID_Y41P,
         .type      = AVMEDIA_TYPE_VIDEO,
         .name      = "y41p",
diff --git a/libavcodec/version.h b/libavcodec/version.h
index f65346a..b9752ce 100644
--- a/libavcodec/version.h
+++ b/libavcodec/version.h
@@ -28,8 +28,8 @@
 #include "libavutil/version.h"

 #define LIBAVCODEC_VERSION_MAJOR  58
-#define LIBAVCODEC_VERSION_MINOR  19
-#define LIBAVCODEC_VERSION_MICRO 104
+#define LIBAVCODEC_VERSION_MINOR  20
+#define LIBAVCODEC_VERSION_MICRO 100

 #define LIBAVCODEC_VERSION_INT AV_VERSION_INT(LIBAVCODEC_VERSION_MAJOR, \
                                               LIBAVCODEC_VERSION_MINOR, \
--
2.7.4
[FFmpeg-devel] [PATCH v3 3/3] lavf: add avs2 fourcc
Signed-off-by: hwren
---
 libavformat/riff.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/libavformat/riff.c b/libavformat/riff.c
index 8911725..4153372 100644
--- a/libavformat/riff.c
+++ b/libavformat/riff.c
@@ -369,6 +369,7 @@ const AVCodecTag ff_codec_bmp_tags[] = {
     { AV_CODEC_ID_ZMBV,         MKTAG('Z', 'M', 'B', 'V') },
     { AV_CODEC_ID_KMVC,         MKTAG('K', 'M', 'V', 'C') },
     { AV_CODEC_ID_CAVS,         MKTAG('C', 'A', 'V', 'S') },
+    { AV_CODEC_ID_AVS2,         MKTAG('A', 'V', 'S', '2') },
     { AV_CODEC_ID_JPEG2000,     MKTAG('m', 'j', 'p', '2') },
     { AV_CODEC_ID_JPEG2000,     MKTAG('M', 'J', '2', 'C') },
     { AV_CODEC_ID_JPEG2000,     MKTAG('L', 'J', '2', 'C') },
--
2.7.4
[FFmpeg-devel] [PATCH v3 2/3] lavc, doc, configure: add libdavs2 video decoder
Add avs2 video decoder via libdavs2 library.

Signed-off-by: hwren
---
 Changelog              |   1 +
 configure              |   4 +
 doc/decoders.texi      |  10 +++
 doc/general.texi       |   8 ++
 libavcodec/Makefile    |   1 +
 libavcodec/allcodecs.c |   1 +
 libavcodec/libdavs2.c  | 204 +++++++++++++++++++++++++
 7 files changed, 229 insertions(+)
 create mode 100644 libavcodec/libdavs2.c

diff --git a/Changelog b/Changelog
index 3d25564..ce1f97c 100644
--- a/Changelog
+++ b/Changelog
@@ -9,6 +9,7 @@ version <next>:
 - aderivative and aintegral audio filters
 - pal75bars and pal100bars video filter sources
 - support mbedTLS based TLS
+- AVS2 video decoder via libdavs2

 version 4.0:

diff --git a/configure b/configure
index 22eeca2..3c9129f 100755
--- a/configure
+++ b/configure
@@ -226,6 +226,7 @@ External library support:
   --enable-libcelt         enable CELT decoding via libcelt [no]
   --enable-libcdio         enable audio CD grabbing with libcdio [no]
   --enable-libcodec2       enable codec2 en/decoding using libcodec2 [no]
+  --enable-libdavs2        enable AVS2 decoding via libdavs2 [no]
   --enable-libdc1394       enable IIDC-1394 grabbing using libdc1394
                            and libraw1394 [no]
   --enable-libfdk-aac      enable AAC de/encoding via libfdk-aac [no]
@@ -1636,6 +1637,7 @@ EXTERNAL_LIBRARY_GPL_LIST="
     avisynth
     frei0r
     libcdio
+    libdavs2
     librubberband
     libvidstab
     libx264
@@ -3042,6 +3044,7 @@ libaom_av1_encoder_deps="libaom"
 libcelt_decoder_deps="libcelt"
 libcodec2_decoder_deps="libcodec2"
 libcodec2_encoder_deps="libcodec2"
+libdavs2_decoder_deps="libdavs2"
 libfdk_aac_decoder_deps="libfdk_aac"
 libfdk_aac_encoder_deps="libfdk_aac"
 libfdk_aac_encoder_select="audio_frame_queue"
@@ -5992,6 +5995,7 @@ enabled libcelt           && require libcelt celt/celt.h celt_decode -lcelt0 &&
                                die "ERROR: libcelt must be installed and version must be >= 0.11.0."; }
 enabled libcaca           && require_pkg_config libcaca caca caca.h caca_create_canvas
 enabled libcodec2         && require libcodec2 codec2/codec2.h codec2_create -lcodec2
+enabled libdavs2          && require_pkg_config libdavs2 "davs2 >= 1.2.34" davs2.h davs2_decoder_decode
 enabled libdc1394         && require_pkg_config libdc1394 libdc1394-2 dc1394/dc1394.h dc1394_new
 enabled libdrm            && require_pkg_config libdrm libdrm xf86drm.h drmGetVersion
 enabled libfdk_aac        && { check_pkg_config libfdk_aac fdk-aac "fdk-aac/aacenc_lib.h" aacEncOpen ||
diff --git a/doc/decoders.texi b/doc/decoders.texi
index a551d5d..f9d1b78 100644
--- a/doc/decoders.texi
+++ b/doc/decoders.texi
@@ -47,6 +47,16 @@
 top-field-first is assumed
 @end table

+@section libdavs2
+
+AVS2/IEEE 1857.4 video decoder wrapper.
+
+This decoder allows libavcodec to decode AVS2 streams with libdavs2 library.
+Using it requires the presence of the libdavs2 headers and library during
+configuration. You need to explicitly configure the build with @code{--enable-libdavs2}.
+
+libdavs2 uses GPLv2, so you may also need to add @code{--enable-gpl} while configuring.
+
 @c man end VIDEO DECODERS

 @chapter Audio Decoders
diff --git a/doc/general.texi b/doc/general.texi
index 2583006..d3c1503 100644
--- a/doc/general.texi
+++ b/doc/general.texi
@@ -17,6 +17,14 @@ for more formats. None of them are used by default, their use has to be
 explicitly requested by passing the appropriate flags to
 @command{./configure}.

+@section libdavs2
+
+FFmpeg can make use of the libdavs2 library for AVS2 decoding.
+
+Go to @url{https://github.com/pkuvcl/davs2} and follow the instructions for
+installing the library. Then pass @code{--enable-libdavs2} to configure to
+enable it.
+
 @section Alliance for Open Media libaom

 FFmpeg can make use of the libaom library for AV1 decoding.
diff --git a/libavcodec/Makefile b/libavcodec/Makefile
index 3ab071a..2a845f1 100644
--- a/libavcodec/Makefile
+++ b/libavcodec/Makefile
@@ -944,6 +944,7 @@ OBJS-$(CONFIG_LIBAOM_AV1_ENCODER)         += libaomenc.o
 OBJS-$(CONFIG_LIBCELT_DECODER)            += libcelt_dec.o
 OBJS-$(CONFIG_LIBCODEC2_DECODER)          += libcodec2.o codec2utils.o
 OBJS-$(CONFIG_LIBCODEC2_ENCODER)          += libcodec2.o codec2utils.o
+OBJS-$(CONFIG_LIBDAVS2_DECODER)           += libdavs2.o
 OBJS-$(CONFIG_LIBFDK_AAC_DECODER)         += libfdk-aacdec.o
 OBJS-$(CONFIG_LIBFDK_AAC_ENCODER)         += libfdk-aacenc.o
 OBJS-$(CONFIG_LIBGSM_DECODER)             += libgsmdec.o
diff --git a/libavcodec/allcodecs.c b/libavcodec/allcodecs.c
index 90d170b..a59b601 100644
--- a/libavcodec/allcodecs.c
+++ b/libavcodec/allcodecs.c
@@ -667,6 +667,7 @@ extern AVCodec ff_libaom_av1_encoder;
 extern AVCodec ff_libcelt_decoder;
 extern AVCodec ff_libcodec2_encoder;
 extern AVCodec ff_libcodec2_decoder;
+extern AVCodec ff_libdavs2_decoder;
 extern AVCodec ff_libfdk_aac_encoder;
 extern AVCodec ff_libfdk_aac_decoder;
 extern AVCodec
Re: [FFmpeg-devel] [PATCH] mov: Make sure PTS are both monotonically increasing, and unique
On Tue, May 29, 2018 at 07:20:33PM +0100, Derek Buitenhuis wrote:
> Hi,
>
> On Tue, May 29, 2018 at 5:04 PM, Sasi Inguva wrote:
> > Hi. sorry for the late reply. I sent a patch similar to this a while back
> > https://patchwork.ffmpeg.org/patch/8227/ but it got lost in the sea. You
> > also want to do,
>
> Sorry I missed that! I'd prefer to use your patch over mine.

I am fine with Sasi's original patch too

[...]

--
Michael     GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

It is what and why we do it that matters, not just one of them.
Re: [FFmpeg-devel] [PATCH] lavfi/tests: Fix 16-bit vf_blend test to avoid memory not aligned to 2 bytes
On Tue, May 29, 2018 at 02:41:33PM +0200, Paul B Mahol wrote:
> On 5/29/18, Andrey Semashev wrote:
> > On 05/24/18 00:07, Andrey Semashev wrote:
> >> Generic C implementation of vf_blend performs reads and writes of 16-bit
> >> elements, which requires the buffers to be aligned to at least 2-byte
> >> boundary.
> >>
> >> Also, the change fixes source buffer overrun caused by src_offset being
> >> added to test handling of misaligned buffers.
> >>
> >> Fixes: #7226
> >
> > Ping? Any comments?
>
> Should be OK.

will apply unless someone else does before

thx

[...]

--
Michael     GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

Dictatorship: All citizens are under surveillance, all their steps and
actions recorded, for the politicians to enforce control.
Democracy: All politicians are under surveillance, all their steps and
actions recorded, for the citizens to enforce control.
Re: [FFmpeg-devel] [PATCH 1/2] qt-faststart - stricter input validations
On Tue, May 29, 2018 at 02:35:23PM +0000, Eran Kornblau wrote:
> Hi,
>
> The attached patch fixes a couple of input validation issues in qt-faststart
> that I noticed while going over the code
>
> Thanks
>
> Eran

>  qt-faststart.c | 13 +++++++++-----
>  1 file changed, 9 insertions(+), 4 deletions(-)
> 1dad4dfcdd67328ed163440550917a3f8fdcb40d  0001-qt-faststart-stricter-input-validations.patch
> From 26ef40268fce426eea608400f81cf2e4d413fca5 Mon Sep 17 00:00:00 2001
> From: erankor
> Date: Tue, 29 May 2018 16:18:05 +0300
> Subject: [PATCH 1/2] qt-faststart - stricter input validations
>
> 1. validate the moov size before checking for cmov atom
> 2. avoid performing arithmetic operations on unvalidated numbers
> 3. verify the stco/co64 offset count does not overflow the stco/co64
>    atom (not only the moov atom)
> ---
>  tools/qt-faststart.c | 13 +++++++++-----
>  1 file changed, 9 insertions(+), 4 deletions(-)

will apply

thx

[...]

--
Michael     GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

When you are offended at any man's fault, turn to yourself and study your
own failings. Then you will forget your anger. -- Epictetus
Re: [FFmpeg-devel] [PATCH] avcodec/qtrle: Do not output duplicated frames on insufficient input
On Sun, May 27, 2018 at 09:59:58PM +0200, Michael Niedermayer wrote:
> This improves performance and makes qtrle behave more similarly to other
> decoders. Libavcodec does generally not output known duplicated frames;
> instead, the calling application can insert them as it needs.
>
> Fixes: Timeout
> Fixes: 6383/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_QTRLE_fuzzer-6199846902956032
>
> Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
> Signed-off-by: Michael Niedermayer
> ---
>  libavcodec/qtrle.c        |  12 ++---
>  tests/ref/fate/qtrle-8bit | 109 +-------------------------
>  2 files changed, 6 insertions(+), 115 deletions(-)

will apply

[...]

--
Michael     GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

The worst form of inequality is to try to make unequal things equal. -- Aristotle
Re: [FFmpeg-devel] [PATCH] doc/ffmpeg - rewrite Stream Selection chapter
2018-05-27 6:16 GMT+02:00, Gyan Doshi:
> v2 attached.

> +In the absence of any map options for a particular output file, ffmpeg inspects the output
> +format to check which type of streams can be included in it, viz. video, audio and/or

Sorry, what is "viz."?

> +subtitles. For each acceptable stream type, ffmpeg will pick one stream,
> +when available, from among all the inputs.

I don't think this is correct, not every stream type is picked.
Or do I misunderstand?

> +It will select that stream based upon the following criteria:
> +@*
> +@*for video, it is the stream with the highest resolution,
> +@*for audio, it is the stream with the most channels,
> +@*for subtitles, it is the first subtitle stream

Please remove the actual current criteria: This is just the current
state of the implementation, for one of the above, this is obviously
not a good choice, for the others, we could find better criteria.
Or mention that they may all change at any point.

> +The output format's default subtitle encoder may be text-based or image-based, and only a
> +subtitle stream of the same type can be chosen.

I wish that were true but I fear it isn't ;-(

[...]

> +In the case where several streams of the same type rate equally, the stream with the lowest
> +index is chosen.

Please remove this.

All-in-all, this is far too complicated imo.

Please resend, Carl Eugen
Re: [FFmpeg-devel] [PATCH]ffplay: Mention codec_name if decoder for codec_id could not be found.
2018-05-29 9:54 GMT+02:00, Marton Balint:
> On Tue, 29 May 2018, Carl Eugen Hoyos wrote:
>
>> Hi!
>>
>> Attached patch makes debugging a little easier imo.
>>
>> Please comment, Carl Eugen
>>
>> diff --git a/fftools/ffplay.c b/fftools/ffplay.c
>> index dcca9c2..f9571d7 100644
>> --- a/fftools/ffplay.c
>> +++ b/fftools/ffplay.c
>> @@ -2578,7 +2578,7 @@ static int stream_component_open(VideoState *is, int stream_index)
>>          if (forced_codec_name) av_log(NULL, AV_LOG_WARNING,
>>                                        "No codec could be found with name '%s'\n", forced_codec_name);
>>          else                   av_log(NULL, AV_LOG_WARNING,
>> -                                      "No codec could be found with id %d\n", avctx->codec_id);
>> +                                      "No codec could be found with id %d (%s)\n", avctx->codec_id, avcodec_get_name(avctx->codec_id));
>
> Maybe go one step further, and change the error message to
>
>     "No decoder could be found for codec %s\n", avcodec_get_name(avctx->codec_id)

Patch applied.

Thank you, Carl Eugen
Re: [FFmpeg-devel] [PATCH] doc/ffmpeg - rewrite Stream Selection chapter
On Sun, 27 May 2018 09:46:46 +0530 Gyan Doshi wrote:

> I was talking about the vertical margins applied to the 'code' element.

That's ugly too. I removed the margin for the code element.

> From 60ed76348e70f1b0a25eadde8d886d47be3fca69 Mon Sep 17 00:00:00 2001
> From: Gyan Doshi
> Date: Thu, 24 May 2018 19:11:00 +0530
> Subject: [PATCH v2] doc/ffmpeg - rewrite Stream Selection chapter

The subject nit still exists.

    doc/ffmpeg: rewrite Stream Selection chapter

> Flesh out with details and examples to show quirks and limitations.
> ---
>  doc/ffmpeg.texi | 187 +++++++++++++++++++++++++++++++++++++++++++++---
>  1 file changed, 177 insertions(+), 10 deletions(-)
>
> diff --git a/doc/ffmpeg.texi b/doc/ffmpeg.texi
> index 88dbdeb95a..803490ce7b 100644
> --- a/doc/ffmpeg.texi
> +++ b/doc/ffmpeg.texi

[...]

> +It will select that stream based upon the following criteria:
> +@*
> +@*for video, it is the stream with the highest resolution,
> +@*for audio, it is the stream with the most channels,
> +@*for subtitles, it is the first subtitle stream found but there's a caveat.
> +The output format's default subtitle encoder may be text-based or image-based, and only a
> +subtitle stream of the same type can be chosen.

Using an itemized list here will look better in my opinion.

    @itemize
    @item
    for video, it is the stream with the highest resolution,
    @item
    ...
    @end itemize

[...]

> +@subsubheading Example: automatic stream selection

The subsubheadings did not render in the HTML for me, but they did in
man. I didn't investigate why.

[...]

> +@subsubheading Example: unlabeled filtergraph outputs

Trailing whitespace.

I'm not convinced a verbose, tutorial-style set of examples belongs here.
I tend to put such things in the wiki, but if you think otherwise that's fine.

That's all of my comments. Everything else LGTM.
[FFmpeg-devel] [PATCH] lavf/mov.c: Set st->start_time for video streams explicitly.
If start_time is not set, ffmpeg takes the duration from the global movie
instead of the per stream duration.

Signed-off-by: Sasi Inguva
---
 libavformat/mov.c                            | 20 +++++++++++---
 tests/fate/mov.mak                           |  4 +++
 tests/ref/fate/mov-neg-firstpts-discard      |  2 +-
 tests/ref/fate/mov-stream-shorter-than-movie | 33 ++++++++++++++++
 4 files changed, 54 insertions(+), 5 deletions(-)
 create mode 100644 tests/ref/fate/mov-stream-shorter-than-movie

diff --git a/libavformat/mov.c b/libavformat/mov.c
index f2a540ad50..1915be5fb5 100644
--- a/libavformat/mov.c
+++ b/libavformat/mov.c
@@ -3672,11 +3672,15 @@ static void mov_fix_index(MOVContext *mov, AVStream *st)

     // If the minimum pts turns out to be greater than zero after fixing the index, then we subtract the
     // dts by that amount to make the first pts zero.
-    if (st->codecpar->codec_type == AVMEDIA_TYPE_VIDEO && msc->min_corrected_pts > 0) {
-        av_log(mov->fc, AV_LOG_DEBUG, "Offset DTS by %"PRId64" to make first pts zero.\n", msc->min_corrected_pts);
-        for (i = 0; i < st->nb_index_entries; ++i) {
-            st->index_entries[i].timestamp -= msc->min_corrected_pts;
+    if (st->codecpar->codec_type == AVMEDIA_TYPE_VIDEO) {
+        if (msc->min_corrected_pts > 0) {
+            av_log(mov->fc, AV_LOG_DEBUG, "Offset DTS by %"PRId64" to make first pts zero.\n", msc->min_corrected_pts);
+            for (i = 0; i < st->nb_index_entries; ++i) {
+                st->index_entries[i].timestamp -= msc->min_corrected_pts;
+            }
         }
+        // Start time should be equal to zero or the duration of any empty edits.
+        st->start_time = empty_edits_sum_duration;
     }

     // Update av stream length, if it ends up shorter than the track's media duration
@@ -4012,6 +4016,14 @@ static void mov_build_index(MOVContext *mov, AVStream *st)
         mov_fix_index(mov, st);
     }

+    // Update start time of the stream.
+    if (st->start_time == AV_NOPTS_VALUE && st->codecpar->codec_type == AVMEDIA_TYPE_VIDEO && st->nb_index_entries > 0) {
+        st->start_time = st->index_entries[0].timestamp + sc->dts_shift;
+        if (sc->ctts_data) {
+            st->start_time += sc->ctts_data[0].duration;
+        }
+    }
+
     mov_estimate_video_delay(mov, st);
 }

diff --git a/tests/fate/mov.mak b/tests/fate/mov.mak
index eadee3abfa..c1d399e5c0 100644
--- a/tests/fate/mov.mak
+++ b/tests/fate/mov.mak
@@ -16,6 +16,7 @@ FATE_MOV = fate-mov-3elist \
            fate-mov-frag-overlap \
            fate-mov-bbi-elst-starts-b \
            fate-mov-neg-firstpts-discard-frames \
+           fate-mov-stream-shorter-than-movie \

 FATE_MOV_FFPROBE = fate-mov-neg-firstpts-discard \
                    fate-mov-aac-2048-priming \
@@ -88,6 +89,9 @@ fate-mov-neg-firstpts-discard: CMD = run ffprobe$(PROGSSUF)$(EXESUF) -show_entri
 # Makes sure that expected frames are generated for mov_neg_first_pts_discard.mov with -vsync 1
 fate-mov-neg-firstpts-discard-frames: CMD = framemd5 -flags +bitexact -i $(TARGET_SAMPLES)/mov/mov_neg_first_pts_discard.mov -vsync 1

+# Makes sure that no frame is dropped/duplicated with fps filter due to start_time / duration miscalculations.
+fate-mov-stream-shorter-than-movie: CMD = framemd5 -flags +bitexact -i $(TARGET_SAMPLES)/mov/mov_stream_shorter_than_movie.mov -vf fps=fps=24 -an
+
 fate-mov-aac-2048-priming: CMD = run ffprobe$(PROGSSUF)$(EXESUF) -show_packets -print_format compact $(TARGET_SAMPLES)/mov/aac-2048-priming.mov

 fate-mov-zombie: CMD = run ffprobe$(PROGSSUF)$(EXESUF) -show_streams -show_packets -show_frames -bitexact -print_format compact $(TARGET_SAMPLES)/mov/white_zombie_scrunch-part.mov

diff --git a/tests/ref/fate/mov-neg-firstpts-discard b/tests/ref/fate/mov-neg-firstpts-discard
index 7c982d3ffe..2e295e3b68 100644
--- a/tests/ref/fate/mov-neg-firstpts-discard
+++ b/tests/ref/fate/mov-neg-firstpts-discard
@@ -1,3 +1,3 @@
 [STREAM]
-start_time=N/A
+start_time=0.00
 [/STREAM]

diff --git a/tests/ref/fate/mov-stream-shorter-than-movie b/tests/ref/fate/mov-stream-shorter-than-movie
new file mode 100644
index 00..28f3ef378c
--- /dev/null
+++ b/tests/ref/fate/mov-stream-shorter-than-movie
@@ -0,0 +1,33 @@
+#format: frame checksums
+#version: 2
+#hash: MD5
+#tb 0: 1/24
+#media_type 0: video
+#codec_id 0: rawvideo
+#dimensions 0: 640x480
+#sar 0: 0/1
+#stream#, dts,        pts, duration,     size, hash
+0,          0,          0,        1,   460800, 3a26ddfa53f09d535c701138027e49dc
+0,          1,          1,        1,   460800, f09fe0d079ee81eb7db617b48ab5eecf
+0,          2,          2,        1,   460800, 40a165b074c7f4d34a41f320400737fc
+0,          3,          3,        1,   460800, 8ba73359c89ebc51e29847ef0e27f7c3
+0,          4,          4,        1,   460800, 0d783fcf3d37b99e7b41c0450e28f905
+0,          5,          5,        1,   460800, 7251de6f3e2ebccc2183aa7090dd59fb
+0,          6,          6,
Re: [FFmpeg-devel] [PATCH 2/2] qt-faststart - optimize the offset change loop
On Tue, May 29, 2018 at 02:36:28PM +, Eran Kornblau wrote: > Hi, > > The attached is a slightly more optimized (and IMHO elegant) code for > updating the stco/co64 offsets > > Thanks > > Eran > qt-faststart.c | 42 +- > 1 file changed, 25 insertions(+), 17 deletions(-) > a0f95e960800141a0a666313f2f3d82a87a3309f > 0002-qt-faststart-optimize-the-offset-change-loop.patch > From 776244b79a8bcfb5732f39fbebb9cd7fc0092bcb Mon Sep 17 00:00:00 2001 > From: erankor > Date: Tue, 29 May 2018 17:29:09 +0300 > Subject: [PATCH 2/2] qt-faststart - optimize the offset change loop > > --- > tools/qt-faststart.c | 42 +- > 1 file changed, 25 insertions(+), 17 deletions(-) > > diff --git a/tools/qt-faststart.c b/tools/qt-faststart.c > index d0ae7245f3..2ddaf87e1b 100644 > --- a/tools/qt-faststart.c > +++ b/tools/qt-faststart.c > @@ -96,9 +96,11 @@ int main(int argc, char *argv[]) > int64_t last_offset; > unsigned char *moov_atom = NULL; > unsigned char *ftyp_atom = NULL; > +unsigned char *ptr; > +unsigned char *end; > uint64_t moov_atom_size; > uint64_t ftyp_atom_size = 0; > -uint64_t i, j; > +uint64_t i; > uint32_t offset_count; > uint64_t current_offset; > int64_t start_offset = 0; > @@ -253,13 +255,16 @@ int main(int argc, char *argv[]) > printf(" bad atom size/element count\n"); > goto error_out; > } > -for (j = 0; j < offset_count; j++) { > -current_offset = BE_32(&moov_atom[i + 12 + j * 4]); > + > +ptr = moov_atom + i + 12; > +end = ptr + offset_count * 4; > +while (ptr < end) { > +current_offset = BE_32(ptr); > current_offset += moov_atom_size; > -moov_atom[i + 12 + j * 4 + 0] = (current_offset >> 24) & > 0xFF; > -moov_atom[i + 12 + j * 4 + 1] = (current_offset >> 16) & > 0xFF; > -moov_atom[i + 12 + j * 4 + 2] = (current_offset >> 8) & > 0xFF; > -moov_atom[i + 12 + j * 4 + 3] = (current_offset >> 0) & > 0xFF; > +*ptr++ = (current_offset >> 24) & 0xFF; > +*ptr++ = (current_offset >> 16) & 0xFF; > +*ptr++ = (current_offset >> 8) & 0xFF; > +*ptr++ = (current_offset >> 0) & 0xFF; > } > i
+= atom_size - 4; > } else if (atom_type == CO64_ATOM) { > @@ -274,17 +279,20 @@ int main(int argc, char *argv[]) > printf(" bad atom size/element count\n"); > goto error_out; > } > -for (j = 0; j < offset_count; j++) { > -current_offset = BE_64(&moov_atom[i + 12 + j * 8]); > + > +ptr = moov_atom + i + 12; > +end = ptr + offset_count * 8; > +while (ptr < end) { > +current_offset = BE_64(ptr); > current_offset += moov_atom_size; > -moov_atom[i + 12 + j * 8 + 0] = (current_offset >> 56) & > 0xFF; > -moov_atom[i + 12 + j * 8 + 1] = (current_offset >> 48) & > 0xFF; > -moov_atom[i + 12 + j * 8 + 2] = (current_offset >> 40) & > 0xFF; > -moov_atom[i + 12 + j * 8 + 3] = (current_offset >> 32) & > 0xFF; > -moov_atom[i + 12 + j * 8 + 4] = (current_offset >> 24) & > 0xFF; > -moov_atom[i + 12 + j * 8 + 5] = (current_offset >> 16) & > 0xFF; > -moov_atom[i + 12 + j * 8 + 6] = (current_offset >> 8) & > 0xFF; > -moov_atom[i + 12 + j * 8 + 7] = (current_offset >> 0) & > 0xFF; > +*ptr++ = (current_offset >> 56) & 0xFF; > +*ptr++ = (current_offset >> 48) & 0xFF; > +*ptr++ = (current_offset >> 40) & 0xFF; > +*ptr++ = (current_offset >> 32) & 0xFF; > +*ptr++ = (current_offset >> 24) & 0xFF; > +*ptr++ = (current_offset >> 16) & 0xFF; > +*ptr++ = (current_offset >> 8) & 0xFF; > +*ptr++ = (current_offset >> 0) & 0xFF; can this be simplified with libavcodec/bytestream.h, libavutil/intreadwrite.h or similar ? [...] -- Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB Modern terrorism, a quick summary: Need oil, start war with country that has oil, kill hundred thousand in war. Let country fall into chaos, be surprised about rise of fundamentalists. Drop more bombs, kill more people, be surprised about them taking revenge and drop even more bombs and strip your own citizens of their rights and freedoms. to be continued signature.asc Description: PGP signature ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
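For readers outside the thread: the byte-by-byte stores in the patch are open-coded big-endian writes, which is what the suggestion about intreadwrite.h is aiming at. Below is a minimal, self-contained sketch of such helpers (mirroring the semantics of AV_RB32/AV_WB32 from libavutil, not the actual macros) and the stco offset loop compressed to a read/add/write per entry; the names are illustrative, not qt-faststart code:

```c
#include <stdint.h>
#include <stddef.h>

/* Big-endian writer with the same semantics as AV_WB32. */
static void wb32(uint8_t *p, uint32_t v)
{
    p[0] = (uint8_t)(v >> 24);
    p[1] = (uint8_t)(v >> 16);
    p[2] = (uint8_t)(v >>  8);
    p[3] = (uint8_t) v;
}

/* Big-endian reader with the same semantics as AV_RB32. */
static uint32_t rb32(const uint8_t *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] <<  8) |  (uint32_t)p[3];
}

/* The stco offset-update loop from the patch, one call per entry. */
static void shift_stco_offsets(uint8_t *ptr, size_t offset_count, uint32_t shift)
{
    const uint8_t *end = ptr + offset_count * 4;
    while (ptr < end) {
        wb32(ptr, rb32(ptr) + shift);
        ptr += 4;
    }
}
```

With 64-bit variants of the same helpers, the co64 branch collapses its eight shift/mask stores into a single write the same way.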
Re: [FFmpeg-devel] [PATCH] avcodec/vc1: fix overlap smoothing filter for P frames
2018-05-29 22:02 GMT+02:00, Jerome Borsboom : > The v_overlap_filter needs to run on the colocated block of the previous > macroblock. For the luma plane, the colocated block is located two blocks > on the left instead of one. > > Signed-off-by: Jerome Borsboom > --- > This should fix the issue with the SA10100.vc1 test file. > > libavcodec/vc1_loopfilter.c | 4 ++-- > 1 file changed, 2 insertions(+), 2 deletions(-) > > diff --git a/libavcodec/vc1_loopfilter.c b/libavcodec/vc1_loopfilter.c > index 4c0de7c025..676922aa18 100644 > --- a/libavcodec/vc1_loopfilter.c > +++ b/libavcodec/vc1_loopfilter.c > @@ -145,8 +145,8 @@ void ff_vc1_p_overlap_filter(VC1Context *v) > > if (v->fcm != ILACE_FRAME) > for (i = 0; i < block_count; i++) { > -if (s->mb_x && v->mb_type[0][s->block_index[i] - 1] && > -(s->first_slice_line || v->mb_type[0][s->block_index[i] - > s->block_wrap[i] - 1])) > +if (s->mb_x && v->mb_type[0][s->block_index[i] - 2 + (i > 3)] > && > +(s->first_slice_line || v->mb_type[0][s->block_index[i] - > s->block_wrap[i] - 2 + (i > 3)])) This also fixes the frame number 5 and 6 of SSL0013.rcv (ticket #7171), the last two frames are still incorrect. Thank you, Carl Eugen ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
Re: [FFmpeg-devel] [PATCH v3] avcodec/vc1: fix out-of-bounds reference pixel replication
On Tue, May 29, 2018 at 02:26:17PM +0200, Jerome Borsboom wrote: > Out-of-bounds reference pixel replication should take into account the frame > coding mode of the reference frame(s), not the frame coding mode of the > current frame. > > Signed-off-by: Jerome Borsboom > --- > Even more corrections. The starting line must also be adjusted by one for an > opposite > reference field. > > libavcodec/vc1_mc.c | 668 > ++-- > 1 file changed, 385 insertions(+), 283 deletions(-) All crashes I saw are gone, thanks [...] -- Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB If you think the Mossad wants you dead since a long time then you are either wrong or dead since a long time. signature.asc Description: PGP signature ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
[FFmpeg-devel] [PATCH] avcodec/vc1: fix overlap smoothing filter for P frames
The v_overlap_filter needs to run on the colocated block of the previous macroblock. For the luma plane, the colocated block is located two blocks on the left instead of one. Signed-off-by: Jerome Borsboom --- This should fix the issue with the SA10100.vc1 test file. libavcodec/vc1_loopfilter.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/libavcodec/vc1_loopfilter.c b/libavcodec/vc1_loopfilter.c index 4c0de7c025..676922aa18 100644 --- a/libavcodec/vc1_loopfilter.c +++ b/libavcodec/vc1_loopfilter.c @@ -145,8 +145,8 @@ void ff_vc1_p_overlap_filter(VC1Context *v) if (v->fcm != ILACE_FRAME) for (i = 0; i < block_count; i++) { -if (s->mb_x && v->mb_type[0][s->block_index[i] - 1] && -(s->first_slice_line || v->mb_type[0][s->block_index[i] - s->block_wrap[i] - 1])) +if (s->mb_x && v->mb_type[0][s->block_index[i] - 2 + (i > 3)] && +(s->first_slice_line || v->mb_type[0][s->block_index[i] - s->block_wrap[i] - 2 + (i > 3)])) vc1_v_overlap_filter(v, s->first_slice_line ? left_blk : topleft_blk, left_blk, i); if (s->mb_x == s->mb_width - 1) if (v->mb_type[0][s->block_index[i]] && -- 2.13.6 ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
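The `- 2 + (i > 3)` term in the patch is easy to misread: in the block-index layout, each macroblock advances the index by two positions per luma row (blocks i = 0..3) but only one per chroma plane (i = 4, 5), so the colocated block of the macroblock to the left sits two luma blocks, or one chroma block, away. A trivial self-contained sketch of just that index step (illustrative, not lavc code):

```c
/* Horizontal block-index distance to the colocated block of the
 * macroblock directly to the left, for block number i (0..3 luma,
 * 4..5 chroma) -- the "- 2 + (i > 3)" term from the patch. */
static int colocated_left_step(int i)
{
    return -2 + (i > 3);
}
```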
[FFmpeg-devel] [PATCH v3 1/3] libavformat/mov: treat udta atoms within trak atoms as stream metadata
Some muxers produce mp4s with a udta (user data) atom nested within a trak atom. The nested udta atoms typically contain stream information such as the title. ffmpeg should treat nested udta atoms as applicable to the stream instead of globally. Signed-off-by: Nik Johnson --- libavformat/mov.c | 19 --- 1 file changed, 16 insertions(+), 3 deletions(-) diff --git a/libavformat/mov.c b/libavformat/mov.c index f2a540ad50..b434802207 100644 --- a/libavformat/mov.c +++ b/libavformat/mov.c @@ -306,6 +306,8 @@ static int mov_read_udta_string(MOVContext *c, AVIOContext *pb, MOVAtom atom) int (*parse)(MOVContext*, AVIOContext*, unsigned, const char*) = NULL; int raw = 0; int num = 0; +AVStream *st; +AVDictionary **metadata; switch (atom.type) { case MKTAG( '@','P','R','M'): key = "premiere_version"; raw = 1; break; @@ -514,12 +516,23 @@ retry: } str[str_size] = 0; } -c->fc->event_flags |= AVFMT_EVENT_FLAG_METADATA_UPDATED; -av_dict_set(&c->fc->metadata, key, str, 0); + +// A udta atom may occur inside a trak atom when specifying trak +// specific user data. For example, some muxers define a trak name. +if (c->fc->nb_streams > 0 && c->trak_index != -1) { +st = c->fc->streams[c->fc->nb_streams-1]; +st->event_flags |= AVFMT_EVENT_FLAG_METADATA_UPDATED; +metadata = &st->metadata; +} else { +c->fc->event_flags |= AVFMT_EVENT_FLAG_METADATA_UPDATED; +metadata = &c->fc->metadata; +} +av_dict_set(metadata, key, str, 0); if (*language && strcmp(language, "und")) { snprintf(key2, sizeof(key2), "%s-%s", key, language); -av_dict_set(&c->fc->metadata, key2, str, 0); +av_dict_set(metadata, key2, str, 0); } + if (!strcmp(key, "encoder")) { int major, minor, micro; if (sscanf(str, "HandBrake %d.%d.%d", &major, &minor, &micro) == 3) { -- 2.17.0.windows.1 ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
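For context on the patch: a `udta` box may legitimately appear at the `moov` level (global metadata) or nested inside a `trak` (per-stream metadata), which is exactly the distinction the patch makes via `c->trak_index`. Below is a minimal, self-contained sketch of walking that box hierarchy to detect the nested case. It handles 32-bit box sizes only; a real parser must also handle 64-bit sizes, size 0, and full bounds checks. This is illustrative and is not the mov.c code:

```c
#include <stdint.h>
#include <string.h>
#include <stddef.h>

static uint32_t box_rb32(const uint8_t *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] <<  8) |  (uint32_t)p[3];
}

/* Find a child box with the given fourcc directly inside buf[0..len).
 * Returns 1 and fills *pos/*size on success. */
static int find_box(const uint8_t *buf, size_t len, const char *fourcc,
                    size_t *pos, size_t *size)
{
    size_t i = 0;
    while (i + 8 <= len) {
        uint32_t box_size = box_rb32(buf + i);
        if (box_size < 8 || i + box_size > len)
            return 0;                       /* malformed; give up */
        if (!memcmp(buf + i + 4, fourcc, 4)) {
            *pos  = i;
            *size = box_size;
            return 1;
        }
        i += box_size;
    }
    return 0;
}

/* Is there a udta nested inside a trak (per-stream metadata), as opposed
 * to a moov-level udta (global metadata)? */
static int trak_has_udta(const uint8_t *moov_payload, size_t len)
{
    size_t trak_pos, trak_size, udta_pos, udta_size;
    if (!find_box(moov_payload, len, "trak", &trak_pos, &trak_size))
        return 0;
    return find_box(moov_payload + trak_pos + 8, trak_size - 8,
                    "udta", &udta_pos, &udta_size);
}
```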
[FFmpeg-devel] [PATCH v3 3/3] libavformat/mov: add fate tests for parsing of trak titles from mov format
Create a fate test to verify ffprobe correctly identifies stream titles in mp4 containers. Signed-off-by: Nik Johnson --- Sample file for fate uploaded to https://www.dropbox.com/s/8itks08yf4s1pgs/trak-name.mp4?dl=0. Should be added to the fate samples under mov/trak-name.mp4 tests/fate/mov.mak | 3 +++ tests/ref/fate/mov-trak-name | 6 ++ 2 files changed, 9 insertions(+) create mode 100644 tests/ref/fate/mov-trak-name diff --git a/tests/fate/mov.mak b/tests/fate/mov.mak index eadee3abfa..e6dc72eccb 100644 --- a/tests/fate/mov.mak +++ b/tests/fate/mov.mak @@ -26,6 +26,7 @@ FATE_MOV_FFPROBE = fate-mov-neg-firstpts-discard \ fate-mov-guess-delay-1 \ fate-mov-guess-delay-2 \ fate-mov-guess-delay-3 \ + fate-mov-trak-name \ FATE_SAMPLES_AVCONV += $(FATE_MOV) FATE_SAMPLES_FFPROBE += $(FATE_MOV_FFPROBE) @@ -105,3 +106,5 @@ fate-mov-gpmf-remux: REF = 8f48e435ee1f6b7e173ea756141eabf3 fate-mov-guess-delay-1: CMD = run ffprobe$(PROGSSUF)$(EXESUF) -show_entries stream=has_b_frames -select_streams v $(TARGET_SAMPLES)/h264/h264_3bf_nopyramid_nobsrestriction.mp4 fate-mov-guess-delay-2: CMD = run ffprobe$(PROGSSUF)$(EXESUF) -show_entries stream=has_b_frames -select_streams v $(TARGET_SAMPLES)/h264/h264_3bf_pyramid_nobsrestriction.mp4 fate-mov-guess-delay-3: CMD = run ffprobe$(PROGSSUF)$(EXESUF) -show_entries stream=has_b_frames -select_streams v $(TARGET_SAMPLES)/h264/h264_4bf_pyramid_nobsrestriction.mp4 + +fate-mov-trak-name: CMD = run ffprobe$(PROGSSUF)$(EXESUF) -show_entries stream_tags=title -select_streams a $(TARGET_SAMPLES)/mov/trak-name.mp4 diff --git a/tests/ref/fate/mov-trak-name b/tests/ref/fate/mov-trak-name new file mode 100644 index 00..36d6c39d82 --- /dev/null +++ b/tests/ref/fate/mov-trak-name @@ -0,0 +1,6 @@ +[STREAM] +TAG:title=System sounds +[/STREAM] +[STREAM] +TAG:title=Microphone +[/STREAM] -- 2.17.0.windows.1 ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
[FFmpeg-devel] [PATCH v3 2/3] libavformat/mov: recognize udta name tag as the stream title
Some muxers write the stream title in a udta atom with the tag 'name'. Recognize 'name' tags as the stream title instead of an unknown tag. Signed-off-by: Nik Johnson --- libavformat/mov.c | 1 + 1 file changed, 1 insertion(+) diff --git a/libavformat/mov.c b/libavformat/mov.c index b434802207..c2675d2644 100644 --- a/libavformat/mov.c +++ b/libavformat/mov.c @@ -340,6 +340,7 @@ static int mov_read_udta_string(MOVContext *c, AVIOContext *pb, MOVAtom atom) return mov_metadata_loci(c, pb, atom.size); case MKTAG( 'm','a','n','u'): key = "make"; break; case MKTAG( 'm','o','d','l'): key = "model"; break; +case MKTAG( 'n','a','m','e'): key = "title"; raw = 1; break; case MKTAG( 'p','c','s','t'): key = "podcast"; parse = mov_metadata_int8_no_padding; break; case MKTAG( 'p','g','a','p'): key = "gapless_playback"; -- 2.17.0.windows.1 ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
Re: [FFmpeg-devel] [PATCH] mov: Make sure PTS are both monotonically increasing, and unique
Hi, On Tue, May 29, 2018 at 5:04 PM, Sasi Inguva wrote: > Hi. sorry for the late reply. I sent a patch similar to this a while back > https://patchwork.ffmpeg.org/patch/8227/ but it got lost in the sea. You > also want to do, Sorry I missed that! I'd prefer to use your patch over mine. I'll hold off on a push for a day or two until I figure out how to handle (or not handle) the midstream PTS. - Derek ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
Re: [FFmpeg-devel] [PATCH] mov: Make sure PTS are both monotonically increasing, and unique
Hi. sorry for the late reply. I sent a patch similar to this a while back https://patchwork.ffmpeg.org/patch/8227/ but it got lost in the sea. You also want to do, @@ -3579,7 +3579,8 @@ static void mov_fix_index(MOVContext *mov, AVStream *st) frame_duration_buffer[num_discarded_begin - 1] = frame_duration; -if (first_non_zero_audio_edit > 0 && st->codecpar->codec_id != AV_CODEC_ID_VORBIS) { +if (st->codecpar->codec_type == AVMEDIA_TYPE_AUDIO && +first_non_zero_audio_edit > 0 && st->codecpar->codec_id != AV_CODEC_ID_VORBIS) { st->skip_samples += frame_duration; } so that we only increment skip samples for audio streams. Otherwise patch looks good to me. On Thu, May 17, 2018 at 7:03 AM Derek Buitenhuis wrote: > On Tue, May 15, 2018 at 8:44 PM, Derek Buitenhuis > wrote: > > We already did this for audio, but it should be done for video too. > > If we don't, seeking back to the start of the file, for example, can > > become quite broken, since the first N packets will have repeating > > and nonmonotonic PTS, yet they need to be decoded even if they are > > to be discarded. > > > > Signed-off-by: Derek Buitenhuis > > --- > > libavformat/mov.c | 4 ++-- > > 1 file changed, 2 insertions(+), 2 deletions(-) > > Ping. > > Is nobody outside Sasi able to review code in this part of > mov.c? That is slightly worrying to me. > > - Derek > ___ > ffmpeg-devel mailing list > ffmpeg-devel@ffmpeg.org > http://ffmpeg.org/mailman/listinfo/ffmpeg-devel > ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
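The property both patches are after — PTS that are strictly increasing and therefore unique — can be stated as a one-pass fix-up over an ordered timestamp list. A self-contained sketch of that invariant (illustrative; the real code operates on MOVStreamContext index entries, not a flat array):

```c
#include <stdint.h>
#include <stddef.h>

/* Bump any pts that does not strictly exceed its predecessor, so the
 * sequence becomes monotonically increasing and free of duplicates. */
static void make_pts_strictly_increasing(int64_t *pts, size_t n)
{
    for (size_t i = 1; i < n; i++)
        if (pts[i] <= pts[i - 1])
            pts[i] = pts[i - 1] + 1;
}
```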
Re: [FFmpeg-devel] [PATCH] Limited timecode support for lavd/decklink
Thank you Marton, That makes sense to me, but can you please clarify which context is the most appropriate to use? Thanks, Jon > On May 29, 2018, at 1:41 AM, Marton Balint wrote: > > > > On Sat, 26 May 2018, Jonathan Morley wrote: > >> Attaching again from another mail client. > > Thanks. There is one issue I found: > > You are setting >video_st->metadata from the VideoInputFrameArrived > callback. That runs in a separate thread from the main thread handling > read_packet calls, and you can only invalidate video_st->metadata in the main > thread. So I suggest to store the timecode string in a context variable, and > do av_dict_set in ff_decklink_read_packet after avpacket_queue_get. > > Regards, > Marton > ___ > ffmpeg-devel mailing list > ffmpeg-devel@ffmpeg.org > http://ffmpeg.org/mailman/listinfo/ffmpeg-devel ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
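Marton's suggestion amounts to a producer/consumer handoff: the capture callback thread only records the latest timecode string in a lock-protected context field, and the main thread copies it into the stream/packet metadata during read_packet, after dequeuing. A minimal pthreads sketch of that pattern — all names here are illustrative, not the decklink code:

```c
#include <pthread.h>
#include <string.h>

typedef struct TCContext {
    pthread_mutex_t lock;
    char tc_str[32];   /* last timecode seen by the capture thread */
    int  tc_valid;
} TCContext;

/* Runs on the capture (VideoInputFrameArrived) thread: store only. */
static void on_frame_arrived(TCContext *ctx, const char *tc)
{
    pthread_mutex_lock(&ctx->lock);
    strncpy(ctx->tc_str, tc, sizeof(ctx->tc_str) - 1);
    ctx->tc_str[sizeof(ctx->tc_str) - 1] = '\0';
    ctx->tc_valid = 1;
    pthread_mutex_unlock(&ctx->lock);
}

/* Runs on the main (read_packet) thread: copy out under the lock,
 * then the caller can av_dict_set() without racing the callback. */
static int fetch_timecode(TCContext *ctx, char *out, size_t out_size)
{
    int valid;
    pthread_mutex_lock(&ctx->lock);
    valid = ctx->tc_valid;
    if (valid) {
        strncpy(out, ctx->tc_str, out_size - 1);
        out[out_size - 1] = '\0';
    }
    pthread_mutex_unlock(&ctx->lock);
    return valid;
}
```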
[FFmpeg-devel] [PATCH 2/2] qt-faststart - optimize the offset change loop
Hi, The attached is a slightly more optimized (and IMHO elegant) code for updating the stco/co64 offsets Thanks Eran 0002-qt-faststart-optimize-the-offset-change-loop.patch Description: 0002-qt-faststart-optimize-the-offset-change-loop.patch ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
[FFmpeg-devel] [PATCH 1/2] qt-faststart - stricter input validations
Hi, The attached patch fixes a couple of input validation issues in fast start that I noticed while going over the code Thanks Eran 0001-qt-faststart-stricter-input-validations.patch Description: 0001-qt-faststart-stricter-input-validations.patch ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
Re: [FFmpeg-devel] [GSOC] [PATCH] DNN module introduction and SRCNN filter update
Patch 0001 pushed. Thanks. ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
Re: [FFmpeg-devel] [PATCH] lavfi/tests: Fix 16-bit vf_blend test to avoid memory not aligned to 2 bytes
On 5/29/18, Andrey Semashev wrote: > On 05/24/18 00:07, Andrey Semashev wrote: >> Generic C implementation of vf_blend performs reads and writes of 16-bit >> elements, which requires the buffers to be aligned to at least 2-byte >> boundary. >> >> Also, the change fixes source buffer overrun caused by src_offset being >> added to to test handling of misaligned buffers. >> >> Fixes: #7226 > > Ping? Any comments? Should be OK. ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
Re: [FFmpeg-devel] [GSOC] [PATCH] DNN module introduction and SRCNN filter update
2018-05-29 5:14 GMT-03:00 Sergey Lavrushkin : > 2018-05-29 4:08 GMT+03:00 Pedro Arthur : >> >> 2018-05-28 19:52 GMT-03:00 Sergey Lavrushkin : >> > 2018-05-28 9:32 GMT+03:00 Guo, Yejun : >> > >> >> looks that no tensorflow dependency is introduced, a new model format >> >> is >> >> created together with some CPU implementation for inference. With >> >> this >> >> idea, Android Neural Network would be a very good reference, see >> >> https://developer.android.google.cn/ndk/guides/neuralnetworks/. It >> >> defines how the model is organized, and also provided a CPU optimized >> >> inference implementation (within the NNAPI runtime, it is open source). >> >> It >> >> is still under development but mature enough to run some popular dnn >> >> models >> >> with proper performance. We can absorb some basic design. Anyway, just >> >> a >> >> reference fyi. (btw, I'm not sure about any IP issue) >> >> >> > >> > The idea was to first introduce something to use when tensorflow is not >> > available. Here is another patch, that introduces tensorflow backend. >> I think it would be better for reviewing if you send the second patch >> in a new email. > > > Then we need to push the first patch, I think. Not necessarily, 'git send-email' may give you a glimpse of how it is done. > >> >> > >> > >> >> For this patch, I have two comments. >> >> >> >> 1. change from "DNNModel* (*load_default_model)(DNNDefaultModel >> >> model_type);" to " DNNModel* (*load_builtin_model)(DNNBuiltinModel >> >> model_type);" >> >> The DNNModule can be invoked by many filters, default model is a good >> >> name at the filter level, while built-in model is better within the DNN >> >> scope. >> >> >> >> typedef struct DNNModule{ >> >> // Loads model and parameters from given file. Returns NULL if it >> >> is >> >> not possible. 
>> >> DNNModel* (*load_model)(const char* model_filename); >> // Loads one of the default models >> DNNModel* (*load_default_model)(DNNDefaultModel model_type); >> // Executes model with specified input and output. Returns >> DNN_ERROR >> otherwise. >> DNNReturnType (*execute_model)(const DNNModel* model); >> // Frees memory allocated for model. >> void (*free_model)(DNNModel** model); >> } DNNModule; >> >> >> 2. add a new variable 'number' for DNNData/InputParams >> As a typical DNN concept, the data shape usually is <batch, height, >> width, channel> or <batch, channel, height, width>, the last component >> denotes its index changes the fastest in the memory. We can add this >> concept into the API, and decide to support NHWC or NCHW or both. >> > >> > >> > I did not add number of elements in batch because I thought, that we >> > would >> > not feed more than one element at once to a network in a ffmpeg filter. >> > But it can be easily added if necessary. >> > >> > So here is the patch that adds tensorflow backend with the previous >> > patch. >> > I forgot to change include guards from AVUTIL_* to AVFILTER_* in it. >> You moved the files from libavutil to libavfilter while it was >> proposed to move them to libavformat. > > > Not only, it was also proposed to move it to libavfilter if it is going to > be used only > in filters. I do not know if this module is useful anywhere else besides > libavfilter. ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
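The shape discussion above boils down to how a 4-D tensor index is flattened into memory, with the last shape component varying fastest. A self-contained sketch of the two common layouts, NHWC and NCHW (illustrative only, not part of the proposed DNN API):

```c
#include <stddef.h>

/* Flat offset of element (n, c, h, w) in an NHWC buffer:
 * the channel index varies fastest in memory. */
static size_t nhwc_offset(size_t n, size_t c, size_t h, size_t w,
                          size_t C, size_t H, size_t W)
{
    return ((n * H + h) * W + w) * C + c;
}

/* Flat offset of the same element in an NCHW buffer:
 * the width index varies fastest in memory. */
static size_t nchw_offset(size_t n, size_t c, size_t h, size_t w,
                          size_t C, size_t H, size_t W)
{
    return ((n * C + c) * H + h) * W + w;
}
```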
Re: [FFmpeg-devel] [PATCH] lavfi/tests: Fix 16-bit vf_blend test to avoid memory not aligned to 2 bytes
On 05/24/18 00:07, Andrey Semashev wrote: Generic C implementation of vf_blend performs reads and writes of 16-bit elements, which requires the buffers to be aligned to at least a 2-byte boundary. Also, the change fixes source buffer overrun caused by src_offset being added to test handling of misaligned buffers. Fixes: #7226 Ping? Any comments? --- tests/checkasm/vf_blend.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tests/checkasm/vf_blend.c b/tests/checkasm/vf_blend.c index 912f3a2c38..a7578fec39 100644 --- a/tests/checkasm/vf_blend.c +++ b/tests/checkasm/vf_blend.c @@ -71,7 +71,7 @@ w = WIDTH / depth; \ \ for (i = 0; i < BUF_UNITS - 1; i++) { \ -int src_offset = i * SIZE_PER_UNIT + i; /* Test various alignments */ \ +int src_offset = i * SIZE_PER_UNIT + (BUF_UNITS - 1 - i) * depth; /* Test various alignments */ \ int dst_offset = i * SIZE_PER_UNIT; /* dst must be aligned */ \ randomize_buffers(); \ call_ref(top1 + src_offset, w, bot1 + src_offset, w, \ ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
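The bug the patch fixes is purely arithmetic: with the old `i * SIZE_PER_UNIT + i` formula, odd values of i produce odd source offsets, which misaligns 16-bit loads, while the fixed formula varies the offset only in whole elements of `depth` bytes. A sketch of both formulas (the constants below are illustrative, not the checkasm values):

```c
#include <stddef.h>

/* Old test offset: varies by +1 per unit, so it can be odd. */
static size_t old_src_offset(size_t i, size_t size_per_unit)
{
    return i * size_per_unit + i;
}

/* Fixed offset: still exercises different offsets across units, but
 * always in multiples of the element size, as in the patch. */
static size_t new_src_offset(size_t i, size_t size_per_unit,
                             size_t buf_units, size_t depth)
{
    return i * size_per_unit + (buf_units - 1 - i) * depth;
}
```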
Re: [FFmpeg-devel] [PATCH v3] avcodec/vc1: fix out-of-bounds reference pixel replication
Out-of-bounds reference pixel replication should take into account the frame coding mode of the reference frame(s), not the frame coding mode of the current frame. Signed-off-by: Jerome Borsboom --- Even more corrections. The starting line must also be adjusted by one for an opposite refence field. libavcodec/vc1_mc.c | 668 ++-- 1 file changed, 385 insertions(+), 283 deletions(-) diff --git a/libavcodec/vc1_mc.c b/libavcodec/vc1_mc.c index 04b359204c..1b8d8799b3 100644 --- a/libavcodec/vc1_mc.c +++ b/libavcodec/vc1_mc.c @@ -179,12 +179,17 @@ void ff_vc1_mc_1mv(VC1Context *v, int dir) int i; uint8_t (*luty)[256], (*lutuv)[256]; int use_ic; +int interlace; +int linesize, uvlinesize; if ((!v->field_mode || (v->ref_field_type[dir] == 1 && v->cur_field_type == 1)) && !v->s.last_picture.f->data[0]) return; +linesize = s->current_picture_ptr->f->linesize[0]; +uvlinesize = s->current_picture_ptr->f->linesize[1]; + mx = s->mv[dir][0][0]; my = s->mv[dir][0][1]; @@ -220,6 +225,7 @@ void ff_vc1_mc_1mv(VC1Context *v, int dir) luty = v->curr_luty; lutuv = v->curr_lutuv; use_ic = *v->curr_use_ic; +interlace = 1; } else { srcY = s->last_picture.f->data[0]; srcU = s->last_picture.f->data[1]; @@ -227,6 +233,7 @@ void ff_vc1_mc_1mv(VC1Context *v, int dir) luty = v->last_luty; lutuv = v->last_lutuv; use_ic = v->last_use_ic; +interlace = s->last_picture.f->interlaced_frame; } } else { srcY = s->next_picture.f->data[0]; @@ -235,6 +242,7 @@ void ff_vc1_mc_1mv(VC1Context *v, int dir) luty = v->next_luty; lutuv = v->next_lutuv; use_ic = v->next_use_ic; +interlace = s->next_picture.f->interlaced_frame; } if (!srcY || !srcU) { @@ -269,9 +277,9 @@ void ff_vc1_mc_1mv(VC1Context *v, int dir) srcV += uvsrc_y * s->uvlinesize + uvsrc_x; if (v->field_mode && v->ref_field_type[dir]) { -srcY += s->current_picture_ptr->f->linesize[0]; -srcU += s->current_picture_ptr->f->linesize[1]; -srcV += s->current_picture_ptr->f->linesize[2]; +srcY += linesize; +srcU += uvlinesize; +srcV += uvlinesize; } /* for 
grayscale we should not try to read from unknown area */ @@ -289,112 +297,105 @@ void ff_vc1_mc_1mv(VC1Context *v, int dir) const int k = 17 + s->mspel * 2; srcY -= s->mspel * (1 + s->linesize); -if (v->fcm == ILACE_FRAME) { -if (src_y - s->mspel & 1) { -s->vdsp.emulated_edge_mc(s->sc.edge_emu_buffer, - srcY, - 2 * s->linesize, - 2 * s->linesize, - k, - k + 1 >> 1, - src_x - s->mspel, - src_y - s->mspel >> 1, - s->h_edge_pos, - v_edge_pos + 1 >> 1); -s->vdsp.emulated_edge_mc(s->sc.edge_emu_buffer + s->linesize, - srcY + s->linesize, - 2 * s->linesize, - 2 * s->linesize, - k, - k >> 1, - src_x - s->mspel, - src_y - s->mspel + 1 >> 1, - s->h_edge_pos, - v_edge_pos >> 1); -} else { -s->vdsp.emulated_edge_mc(s->sc.edge_emu_buffer, - srcY, - 2 * s->linesize, - 2 * s->linesize, - k, - k + 1 >> 1, - src_x - s->mspel, - src_y - s->mspel >> 1, - s->h_edge_pos, - v_edge_pos >> 1); -s->vdsp.emulated_edge_mc(s->sc.edge_emu_buffer + s->linesize, - srcY + s->linesize, - 2 * s->linesize, - 2 * s->linesize, +if (interlace) { +s->vdsp.emulated_edge_mc(s->sc.edge_emu_buffer, + srcY, + linesize << 1, + linesize << 1, +
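The operation being corrected in this patch — out-of-bounds reference pixel replication — is edge clamping: when a motion vector points outside the reference picture, the nearest edge pixel is repeated. A minimal sketch of that sampling rule for a single progressive plane (illustrative; the real `emulated_edge_mc` copies into an edge buffer and must additionally use the reference frame's field vs. frame line stride, which is what the patch fixes):

```c
/* Clamp-sampled read: coordinates outside the w x h plane replicate
 * the nearest edge pixel. */
static unsigned char ref_pixel(const unsigned char *plane, int stride,
                               int w, int h, int x, int y)
{
    if (x < 0)     x = 0;
    if (x > w - 1) x = w - 1;
    if (y < 0)     y = 0;
    if (y > h - 1) y = h - 1;
    return plane[y * stride + x];
}
```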
Re: [FFmpeg-devel] [PATCH 2/2] ffmpeg: Use the colour properties from the input stream when doing transcode
On 28.05.2018 09:30, Xiang, Haihao wrote: On Sat, 2018-05-26 at 17:29 +0100, Mark Thompson wrote: On 25/05/18 07:57, Tobias Rapp wrote: On 25.05.2018 07:58, Xiang, Haihao wrote: On Thu, 2018-05-24 at 11:15 +0100, Mark Thompson wrote: For example: ffmpeg -i bt709_input.mkv -vf colorspace=bt2020 bt2020_output.mkv will have the output file marked as BT.709 after this patch, where previously it was "unspecified". (Explicitly setting -color_primaries/-color_trc/- colorspace on the output works in both cases.) I agree with you it's not worse than before as we don't get the expected result in both cases. Not quite: When a file says "I don't know this property value" you have a chance to look up the value somewhere else or use a default. When it says "I know the value" and gives a wrong value, you completely lose trust. Right, that is a compelling argument. I agree with you, so I definitely won't apply the patch in this form. According to the comment in avcodec.h, the color properties in AVCodecContext should be set by user for encoding. I think ffmpeg is the user in the case below. Where are the color properties set if we don't set the default values in init_output_stream_encode()? ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi -i input-with-hdr.mkv -c:v hevc_vaapi -profile:v main10 output.h265 Setting color properties in init_output_stream_encode() basically looks OK, but the question "when" seems more important than "where". It should be called after some output frame is available, and use that to initialize the encoder. Currently the current AVFrame is not available in init_output_stream_encode(), only the buffersink properties. Hopefully somebody with more knowledge about FFmpeg infrastructure will make a recommendation on how this should be solved: By adding color properties and other frame data to the buffersink interface? Or by making the current/last frame available in init_output_stream_encode()? Or ...?
It doesn't need to be auto-negotiated like in the mentioned patch series for color_range, it just needs to be forwarded properly from filter graph output to the encoder. Regards, Tobias ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
Re: [FFmpeg-devel] [PATCH] lavfi: add opencl tonemap filter.
I see no obvious issues with the algorithm. (Though I haven't tested it) So "LGTM" On Tue, 29 May 2018 13:54:27 +0800, Ruiling Song wrote: > This filter does HDR(HDR10/HLG) to SDR conversion with tone-mapping. > > An example command to use this filter with vaapi codecs: > FFMPEG -init_hw_device vaapi=va:/dev/dri/renderD128 -init_hw_device \ > opencl=ocl@va -hwaccel vaapi -hwaccel_device va -hwaccel_output_format \ > vaapi -i INPUT -filter_hw_device ocl -filter_complex \ > '[0:v]hwmap,tonemap_opencl=t=bt2020:tonemap=linear:format=p010[x1]; \ > [x1]hwmap=derive_device=vaapi:reverse=1' -c:v hevc_vaapi -profile 2 OUTPUT > > v2: > add peak detection. > > Signed-off-by: Ruiling Song > --- > configure | 1 + > libavfilter/Makefile | 2 + > libavfilter/allfilters.c | 1 + > libavfilter/colorspace_basic.c | 89 + > libavfilter/colorspace_basic.h | 40 ++ > libavfilter/opencl/colorspace_basic.cl | 187 ++ > libavfilter/opencl/tonemap.cl | 278 ++ > libavfilter/opencl_source.h| 2 + > libavfilter/vf_tonemap_opencl.c| 655 > + > 9 files changed, 1255 insertions(+) > create mode 100644 libavfilter/colorspace_basic.c > create mode 100644 libavfilter/colorspace_basic.h > create mode 100644 libavfilter/opencl/colorspace_basic.cl > create mode 100644 libavfilter/opencl/tonemap.cl > create mode 100644 libavfilter/vf_tonemap_opencl.c > > diff --git a/configure b/configure > index e52f8f8..ee3586b 100755 > --- a/configure > +++ b/configure > @@ -3401,6 +3401,7 @@ tinterlace_filter_deps="gpl" > tinterlace_merge_test_deps="tinterlace_filter" > tinterlace_pad_test_deps="tinterlace_filter" > tonemap_filter_deps="const_nan" > +tonemap_opencl_filter_deps="opencl" > unsharp_opencl_filter_deps="opencl" > uspp_filter_deps="gpl avcodec" > vaguedenoiser_filter_deps="gpl" > diff --git a/libavfilter/Makefile b/libavfilter/Makefile > index c68ef05..0915656 100644 > --- a/libavfilter/Makefile > +++ b/libavfilter/Makefile > @@ -352,6 +352,8 @@ OBJS-$(CONFIG_TINTERLACE_FILTER) += > vf_tinterlace.o > 
OBJS-$(CONFIG_TLUT2_FILTER) += vf_lut2.o framesync.o > OBJS-$(CONFIG_TMIX_FILTER) += vf_mix.o framesync.o > OBJS-$(CONFIG_TONEMAP_FILTER)+= vf_tonemap.o > +OBJS-$(CONFIG_TONEMAP_OPENCL_FILTER) += vf_tonemap_opencl.o > colorspace_basic.o opencl.o \ > +opencl/tonemap.o > opencl/colorspace_basic.o > OBJS-$(CONFIG_TRANSPOSE_FILTER) += vf_transpose.o > OBJS-$(CONFIG_TRIM_FILTER) += trim.o > OBJS-$(CONFIG_UNPREMULTIPLY_FILTER) += vf_premultiply.o framesync.o > diff --git a/libavfilter/allfilters.c b/libavfilter/allfilters.c > index b44093d..6873bab 100644 > --- a/libavfilter/allfilters.c > +++ b/libavfilter/allfilters.c > @@ -343,6 +343,7 @@ extern AVFilter ff_vf_tinterlace; > extern AVFilter ff_vf_tlut2; > extern AVFilter ff_vf_tmix; > extern AVFilter ff_vf_tonemap; > +extern AVFilter ff_vf_tonemap_opencl; > extern AVFilter ff_vf_transpose; > extern AVFilter ff_vf_trim; > extern AVFilter ff_vf_unpremultiply; > diff --git a/libavfilter/colorspace_basic.c b/libavfilter/colorspace_basic.c > new file mode 100644 > index 000..93f9f08 > --- /dev/null > +++ b/libavfilter/colorspace_basic.c > @@ -0,0 +1,89 @@ > +/* > + * This file is part of FFmpeg. > + * > + * FFmpeg is free software; you can redistribute it and/or > + * modify it under the terms of the GNU Lesser General Public > + * License as published by the Free Software Foundation; either > + * version 2.1 of the License, or (at your option) any later version. > + * > + * FFmpeg is distributed in the hope that it will be useful, > + * but WITHOUT ANY WARRANTY; without even the implied warranty of > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU > + * Lesser General Public License for more details. 
> + * > + * You should have received a copy of the GNU Lesser General Public > + * License along with FFmpeg; if not, write to the Free Software > + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 > USA > + */ > + > +#include "colorspace_basic.h" > + > + > +void invert_matrix3x3(const double in[3][3], double out[3][3]) > +{ > +double m00 = in[0][0], m01 = in[0][1], m02 = in[0][2], > + m10 = in[1][0], m11 = in[1][1], m12 = in[1][2], > + m20 = in[2][0], m21 = in[2][1], m22 = in[2][2]; > +int i, j; > +double det; > + > +out[0][0] = (m11 * m22 - m21 * m12); > +out[0][1] = -(m01 * m22 - m21 * m02); > +out[0][2] = (m01 * m12 - m11 * m02); > +out[1][0] = -(m10 * m22 - m20 * m12); > +out[1][1] = (m00 * m22 - m20 * m02); > +out[1][2] = -(m00 * m12 - m10 * m02); > +out[2][0] = (m10 * m21 - m20 * m11); > +out[2][1] = -(m00 * m21 - m20 *
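The patch's `invert_matrix3x3` (truncated above) uses the classical adjugate-over-determinant formula. A self-contained version of the same formula, handy for sanity-checking that M·M⁻¹ = I; this sketch is for illustration and is not the libavfilter file:

```c
#include <math.h>

/* Invert a 3x3 matrix via the adjugate divided by the determinant. */
static void invert3x3(const double in[3][3], double out[3][3])
{
    double m00 = in[0][0], m01 = in[0][1], m02 = in[0][2],
           m10 = in[1][0], m11 = in[1][1], m12 = in[1][2],
           m20 = in[2][0], m21 = in[2][1], m22 = in[2][2];
    double det;

    /* cofactors, already transposed (adjugate) */
    out[0][0] =  (m11 * m22 - m21 * m12);
    out[0][1] = -(m01 * m22 - m21 * m02);
    out[0][2] =  (m01 * m12 - m11 * m02);
    out[1][0] = -(m10 * m22 - m20 * m12);
    out[1][1] =  (m00 * m22 - m20 * m02);
    out[1][2] = -(m00 * m12 - m10 * m02);
    out[2][0] =  (m10 * m21 - m20 * m11);
    out[2][1] = -(m00 * m21 - m20 * m01);
    out[2][2] =  (m00 * m11 - m10 * m01);

    /* expansion along the first column */
    det = m00 * out[0][0] + m10 * out[0][1] + m20 * out[0][2];
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            out[i][j] /= det;
}
```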
Re: [FFmpeg-devel] [PATCH v2] avcodec/vc1: fix out-of-bounds reference pixel replication
Out-of-bounds reference pixel replication should take into account the frame coding mode of the reference frame(s), not the frame coding mode of the current frame.

Signed-off-by: Jerome Borsboom
---

> Does this resolve the SIGSEGV?

I think I made a mistake in the calculation of the starting line for progressive reference pictures when the current picture is a field interlaced picture. Instead of adjusting the edge position, the starting line must be adjusted as the vertical stride for replication is half the stride of the field interlaced picture.

 libavcodec/vc1_mc.c | 659 ++--
 1 file changed, 379 insertions(+), 280 deletions(-)

diff --git a/libavcodec/vc1_mc.c b/libavcodec/vc1_mc.c
index 04b359204c..16fc531712 100644
--- a/libavcodec/vc1_mc.c
+++ b/libavcodec/vc1_mc.c
@@ -179,12 +179,17 @@ void ff_vc1_mc_1mv(VC1Context *v, int dir)
     int i;
     uint8_t (*luty)[256], (*lutuv)[256];
     int use_ic;
+    int interlace;
+    int linesize, uvlinesize;

     if ((!v->field_mode || (v->ref_field_type[dir] == 1 && v->cur_field_type == 1)) && !v->s.last_picture.f->data[0])
         return;

+    linesize   = s->current_picture_ptr->f->linesize[0];
+    uvlinesize = s->current_picture_ptr->f->linesize[1];
+
     mx = s->mv[dir][0][0];
     my = s->mv[dir][0][1];

@@ -220,6 +225,7 @@ void ff_vc1_mc_1mv(VC1Context *v, int dir)
             luty   = v->curr_luty;
             lutuv  = v->curr_lutuv;
             use_ic = *v->curr_use_ic;
+            interlace = 1;
         } else {
             srcY = s->last_picture.f->data[0];
             srcU = s->last_picture.f->data[1];
@@ -227,6 +233,7 @@
             luty   = v->last_luty;
             lutuv  = v->last_lutuv;
             use_ic = v->last_use_ic;
+            interlace = s->last_picture.f->interlaced_frame;
         }
     } else {
         srcY = s->next_picture.f->data[0];
@@ -235,6 +242,7 @@
         luty   = v->next_luty;
         lutuv  = v->next_lutuv;
         use_ic = v->next_use_ic;
+        interlace = s->next_picture.f->interlaced_frame;
     }

     if (!srcY || !srcU) {
@@ -269,9 +277,9 @@
     srcV += uvsrc_y * s->uvlinesize + uvsrc_x;

     if (v->field_mode && v->ref_field_type[dir]) {
-        srcY += s->current_picture_ptr->f->linesize[0];
-        srcU += s->current_picture_ptr->f->linesize[1];
-        srcV += s->current_picture_ptr->f->linesize[2];
+        srcY += linesize;
+        srcU += uvlinesize;
+        srcV += uvlinesize;
     }

     /* for grayscale we should not try to read from unknown area */
@@ -289,112 +297,104 @@
         const int k = 17 + s->mspel * 2;

         srcY -= s->mspel * (1 + s->linesize);
-        if (v->fcm == ILACE_FRAME) {
-            if (src_y - s->mspel & 1) {
-                s->vdsp.emulated_edge_mc(s->sc.edge_emu_buffer,
-                                         srcY,
-                                         2 * s->linesize,
-                                         2 * s->linesize,
-                                         k,
-                                         k + 1 >> 1,
-                                         src_x - s->mspel,
-                                         src_y - s->mspel >> 1,
-                                         s->h_edge_pos,
-                                         v_edge_pos + 1 >> 1);
-                s->vdsp.emulated_edge_mc(s->sc.edge_emu_buffer + s->linesize,
-                                         srcY + s->linesize,
-                                         2 * s->linesize,
-                                         2 * s->linesize,
-                                         k,
-                                         k >> 1,
-                                         src_x - s->mspel,
-                                         src_y - s->mspel + 1 >> 1,
-                                         s->h_edge_pos,
-                                         v_edge_pos >> 1);
-            } else {
-                s->vdsp.emulated_edge_mc(s->sc.edge_emu_buffer,
-                                         srcY,
-                                         2 * s->linesize,
-                                         2 * s->linesize,
-                                         k,
-                                         k + 1 >> 1,
-                                         src_x - s->mspel,
-                                         src_y - s->mspel >> 1,
-                                         s->h_edge_pos,
-                                         v_edge_pos >> 1);
-                s->vdsp.emulated_edge_mc(s->sc.edge_emu_buffer + s->linesize,
-                                         srcY + s->linesize,
-                                         2 * s->linesize,
-                                         2 * s->linesize,
+
Re: [FFmpeg-devel] [PATCH] lavfi: add opencl tonemap filter.
> -----Original Message-----
> From: ffmpeg-devel [mailto:ffmpeg-devel-boun...@ffmpeg.org] On Behalf
> Of myp...@gmail.com
> Sent: Tuesday, May 29, 2018 3:40 PM
> To: FFmpeg development discussions and patches <ffmpeg-de...@ffmpeg.org>
> Subject: Re: [FFmpeg-devel] [PATCH] lavfi: add opencl tonemap filter.
>
> 2018-05-29 13:54 GMT+08:00 Ruiling Song :
> > This filter does HDR(HDR10/HLG) to SDR conversion with tone-mapping.
> >
> > An example command to use this filter with vaapi codecs:
> > ffmpeg -init_hw_device vaapi=va:/dev/dri/renderD128 -init_hw_device \
> > opencl=ocl@va -hwaccel vaapi -hwaccel_device va -hwaccel_output_format \
> > vaapi -i INPUT -filter_hw_device ocl -filter_complex \
> > '[0:v]hwmap,tonemap_opencl=t=bt2020:tonemap=linear:format=p010[x1]; \
> > [x1]hwmap=derive_device=vaapi:reverse=1' -c:v hevc_vaapi -profile 2 OUTPUT
> >
> > v2:
> > add peak detection.
> >
> > Signed-off-by: Ruiling Song
>
> > +static int tonemap_opencl_config_output(AVFilterLink *outlink)
> > +{
> > +    AVFilterContext *avctx = outlink->src;
> > +    TonemapOpenCLContext *s = avctx->priv;
> > +    int ret;
> > +    if (s->format == AV_PIX_FMT_NONE)
> > +        av_log(avctx, AV_LOG_WARNING, "format not set, use default format NV12\n");
>
> I think we can give a default format with AV_PIX_FMT_NV12 in
> tonemap_opencl_options[] for this case
> and I think now we only support NV12/P010 output in current implement.

Sounds good.

> > +    { "format",    "output pixel format", OFFSET(format), AV_OPT_TYPE_PIXEL_FMT, {.i64 = AV_PIX_FMT_NONE}, AV_PIX_FMT_NONE, AV_PIX_FMT_GBRAP12LE, FLAGS, "fmt" },
>
> Missing the sub-option nv12 and p010 ?

Seems like using AV_OPT_TYPE_PIXEL_FMT, the framework parsed the user format argument correctly. So I think no need to add sub-options?

Thanks!
Ruiling

> > +    { "peak",      "signal peak override",      OFFSET(peak),            AV_OPT_TYPE_DOUBLE, {.dbl = 0},   0, DBL_MAX, FLAGS },
> > +    { "param",     "tonemap parameter",         OFFSET(param),           AV_OPT_TYPE_DOUBLE, {.dbl = NAN}, DBL_MIN, DBL_MAX, FLAGS },
> > +    { "desat",     "desaturation parameter",    OFFSET(desat_param),     AV_OPT_TYPE_DOUBLE, {.dbl = 0.5}, 0, DBL_MAX, FLAGS },
> > +    { "threshold", "scene detection threshold", OFFSET(scene_threshold), AV_OPT_TYPE_DOUBLE, {.dbl = 0.2}, 0, DBL_MAX, FLAGS },
> > +    { NULL }
> > +};
> > +
> > +AVFILTER_DEFINE_CLASS(tonemap_opencl);
> > +
> > +static const AVFilterPad tonemap_opencl_inputs[] = {
> > +    {
> > +        .name         = "default",
> > +        .type         = AVMEDIA_TYPE_VIDEO,
> > +        .filter_frame = &tonemap_opencl_filter_frame,
> > +        .config_props = &tonemap_opencl_filter_config_input,
> > +    },
> > +    { NULL }
> > +};
> > +
> > +static const AVFilterPad tonemap_opencl_outputs[] = {
> > +    {
> > +        .name         = "default",
> > +        .type         = AVMEDIA_TYPE_VIDEO,
> > +        .config_props = &tonemap_opencl_config_output,
> > +    },
> > +    { NULL }
> > +};
> > +
> > +AVFilter ff_vf_tonemap_opencl = {
> > +    .name           = "tonemap_opencl",
> > +    .description    = NULL_IF_CONFIG_SMALL("perform HDR to SDR conversion with tonemapping"),
> > +    .priv_size      = sizeof(TonemapOpenCLContext),
> > +    .priv_class     = &tonemap_opencl_class,
> > +    .init           = &tonemap_opencl_filter_init,
> > +    .uninit         = &tonemap_opencl_uninit,
> > +    .query_formats  = &tonemap_opencl_filter_query_formats,
> > +    .inputs         = tonemap_opencl_inputs,
> > +    .outputs        = tonemap_opencl_outputs,
> > +    .flags_internal = FF_FILTER_FLAG_HWFRAME_AWARE,
> > +};
> > --
> > 2.7.4

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
Re: [FFmpeg-devel] [GSOC] [PATCH] DNN module introduction and SRCNN filter update
2018-05-29 4:08 GMT+03:00 Pedro Arthur :
> 2018-05-28 19:52 GMT-03:00 Sergey Lavrushkin :
> > 2018-05-28 9:32 GMT+03:00 Guo, Yejun :
> >
> >> looks that no tensorflow dependency is introduced, a new model format is
> >> created together with some CPU implementation for inference. With this
> >> idea, Android Neural Network would be a very good reference, see
> >> https://developer.android.google.cn/ndk/guides/neuralnetworks/. It
> >> defines how the model is organized, and also provided a CPU optimized
> >> inference implementation (within the NNAPI runtime, it is open source). It
> >> is still under development but mature enough to run some popular dnn models
> >> with proper performance. We can absorb some basic design. Anyway, just a
> >> reference fyi. (btw, I'm not sure about any IP issue)
> >
> > The idea was to first introduce something to use when tensorflow is not
> > available. Here is another patch, that introduces tensorflow backend.
>
> I think it would be better for reviewing if you send the second patch
> in a new email. Then we need to push the first patch, I think.
>
> >> For this patch, I have two comments.
> >>
> >> 1. change from "DNNModel* (*load_default_model)(DNNDefaultModel
> >> model_type);" to "DNNModel* (*load_builtin_model)(DNNBuiltinModel
> >> model_type);"
> >> The DNNModule can be invoked by many filters, default model is a good
> >> name at the filter level, while built-in model is better within the DNN
> >> scope.
> >>
> >> typedef struct DNNModule{
> >>     // Loads model and parameters from given file. Returns NULL if it is not possible.
> >>     DNNModel* (*load_model)(const char* model_filename);
> >>     // Loads one of the default models
> >>     DNNModel* (*load_default_model)(DNNDefaultModel model_type);
> >>     // Executes model with specified input and output. Returns DNN_ERROR otherwise.
> >>     DNNReturnType (*execute_model)(const DNNModel* model);
> >>     // Frees memory allocated for model.
> >>     void (*free_model)(DNNModel** model);
> >> } DNNModule;
> >>
> >> 2. add a new variable 'number' for DNNData/InputParams
> >> As a typical DNN concept, the data shape usually is: <batch, height,
> >> width, channel> or <batch, channel, height, width>, the last component
> >> denotes its index changes the fastest in the memory. We can add this
> >> concept into the API, and decide to support <batch, height, width, channel>
> >> or <batch, channel, height, width> or both.
> >
> > I did not add number of elements in batch because I thought, that we would
> > not feed more than one element at once to a network in a ffmpeg filter.
> > But it can be easily added if necessary.
> >
> > So here is the patch that adds tensorflow backend with the previous patch.
> > I forgot to change include guards from AVUTIL_* to AVFILTER_* in it.
>
> You moved the files from libavutil to libavfilter while it was
> proposed to move them to libavformat.

Not only, it was also proposed to move it to libavfilter if it is going to be used only in filters. I do not know if this module is useful anywhere else besides libavfilter.
Re: [FFmpeg-devel] [PATCH] Limited timecode support for lavd/decklink
On Sat, 26 May 2018, Jonathan Morley wrote:

> Attaching again from another mail client.

Thanks. There is one issue I found: You are setting video_st->metadata from the VideoInputFrameArrived callback. That runs in a separate thread from the main thread handling read_packet calls, and you can only invalidate video_st->metadata in the main thread.

So I suggest to store the timecode string in a context variable, and do av_dict_set in ff_decklink_read_packet after avpacket_queue_get.

Regards,
Marton
Re: [FFmpeg-devel] [PATCH]ffplay: Mention codec_name if decoder for codec_id could not be found.
On Tue, 29 May 2018, Carl Eugen Hoyos wrote:

> Hi!
>
> Attached patch makes debugging a little easier imo.
>
> Please comment, Carl Eugen

> diff --git a/fftools/ffplay.c b/fftools/ffplay.c
> index dcca9c2..f9571d7 100644
> --- a/fftools/ffplay.c
> +++ b/fftools/ffplay.c
> @@ -2578,7 +2578,7 @@ static int stream_component_open(VideoState *is, int stream_index)
>          if (forced_codec_name) av_log(NULL, AV_LOG_WARNING,
>                                        "No codec could be found with name '%s'\n", forced_codec_name);
>          else                   av_log(NULL, AV_LOG_WARNING,
> -                                      "No codec could be found with id %d\n", avctx->codec_id);
> +                                      "No codec could be found with id %d (%s)\n", avctx->codec_id, avcodec_get_name(avctx->codec_id));

Maybe go one step further, and change the error message to

    "No decoder could be found for codec %s\n", avcodec_get_name(avctx->codec_id)

I don't see any use for dumping the codec_id, it has no usefulness to the end user IMHO.

Regards,
Marton
Re: [FFmpeg-devel] [PATCH] lavfi: add opencl tonemap filter.
2018-05-29 13:54 GMT+08:00 Ruiling Song :
> This filter does HDR(HDR10/HLG) to SDR conversion with tone-mapping.
>
> An example command to use this filter with vaapi codecs:
> ffmpeg -init_hw_device vaapi=va:/dev/dri/renderD128 -init_hw_device \
> opencl=ocl@va -hwaccel vaapi -hwaccel_device va -hwaccel_output_format \
> vaapi -i INPUT -filter_hw_device ocl -filter_complex \
> '[0:v]hwmap,tonemap_opencl=t=bt2020:tonemap=linear:format=p010[x1]; \
> [x1]hwmap=derive_device=vaapi:reverse=1' -c:v hevc_vaapi -profile 2 OUTPUT
>
> v2:
> add peak detection.
>
> Signed-off-by: Ruiling Song
> ---
>  configure                              |   1 +
>  libavfilter/Makefile                   |   2 +
>  libavfilter/allfilters.c               |   1 +
>  libavfilter/colorspace_basic.c         |  89 +
>  libavfilter/colorspace_basic.h         |  40 ++
>  libavfilter/opencl/colorspace_basic.cl | 187 ++
>  libavfilter/opencl/tonemap.cl          | 278 ++
>  libavfilter/opencl_source.h            |   2 +
>  libavfilter/vf_tonemap_opencl.c        | 655 +
>  9 files changed, 1255 insertions(+)
>  create mode 100644 libavfilter/colorspace_basic.c
>  create mode 100644 libavfilter/colorspace_basic.h
>  create mode 100644 libavfilter/opencl/colorspace_basic.cl
>  create mode 100644 libavfilter/opencl/tonemap.cl
>  create mode 100644 libavfilter/vf_tonemap_opencl.c
>
> diff --git a/configure b/configure
> index e52f8f8..ee3586b 100755
> --- a/configure
> +++ b/configure
> @@ -3401,6 +3401,7 @@ tinterlace_filter_deps="gpl"
>  tinterlace_merge_test_deps="tinterlace_filter"
>  tinterlace_pad_test_deps="tinterlace_filter"
>  tonemap_filter_deps="const_nan"
> +tonemap_opencl_filter_deps="opencl"
>  unsharp_opencl_filter_deps="opencl"
>  uspp_filter_deps="gpl avcodec"
>  vaguedenoiser_filter_deps="gpl"
> diff --git a/libavfilter/Makefile b/libavfilter/Makefile
> index c68ef05..0915656 100644
> --- a/libavfilter/Makefile
> +++ b/libavfilter/Makefile
> @@ -352,6 +352,8 @@ OBJS-$(CONFIG_TINTERLACE_FILTER)             += vf_tinterlace.o
>  OBJS-$(CONFIG_TLUT2_FILTER)                  += vf_lut2.o framesync.o
>  OBJS-$(CONFIG_TMIX_FILTER)                   += vf_mix.o framesync.o
>  OBJS-$(CONFIG_TONEMAP_FILTER)                += vf_tonemap.o
> +OBJS-$(CONFIG_TONEMAP_OPENCL_FILTER)         += vf_tonemap_opencl.o colorspace_basic.o opencl.o \
> +                                                opencl/tonemap.o opencl/colorspace_basic.o
>  OBJS-$(CONFIG_TRANSPOSE_FILTER)              += vf_transpose.o
>  OBJS-$(CONFIG_TRIM_FILTER)                   += trim.o
>  OBJS-$(CONFIG_UNPREMULTIPLY_FILTER)          += vf_premultiply.o framesync.o
> diff --git a/libavfilter/allfilters.c b/libavfilter/allfilters.c
> index b44093d..6873bab 100644
> --- a/libavfilter/allfilters.c
> +++ b/libavfilter/allfilters.c
> @@ -343,6 +343,7 @@ extern AVFilter ff_vf_tinterlace;
>  extern AVFilter ff_vf_tlut2;
>  extern AVFilter ff_vf_tmix;
>  extern AVFilter ff_vf_tonemap;
> +extern AVFilter ff_vf_tonemap_opencl;
>  extern AVFilter ff_vf_transpose;
>  extern AVFilter ff_vf_trim;
>  extern AVFilter ff_vf_unpremultiply;
> diff --git a/libavfilter/colorspace_basic.c b/libavfilter/colorspace_basic.c
> new file mode 100644
> index 000..93f9f08
> --- /dev/null
> +++ b/libavfilter/colorspace_basic.c
> @@ -0,0 +1,89 @@
> +/*
> + * This file is part of FFmpeg.
> + *
> + * FFmpeg is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU Lesser General Public
> + * License as published by the Free Software Foundation; either
> + * version 2.1 of the License, or (at your option) any later version.
> + *
> + * FFmpeg is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> + * Lesser General Public License for more details.
> + *
> + * You should have received a copy of the GNU Lesser General Public
> + * License along with FFmpeg; if not, write to the Free Software
> + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
> + */
> +
> +#include "colorspace_basic.h"
> +
> +
> +void invert_matrix3x3(const double in[3][3], double out[3][3])
> +{
> +    double m00 = in[0][0], m01 = in[0][1], m02 = in[0][2],
> +           m10 = in[1][0], m11 = in[1][1], m12 = in[1][2],
> +           m20 = in[2][0], m21 = in[2][1], m22 = in[2][2];
> +    int i, j;
> +    double det;
> +
> +    out[0][0] =  (m11 * m22 - m21 * m12);
> +    out[0][1] = -(m01 * m22 - m21 * m02);
> +    out[0][2] =  (m01 * m12 - m11 * m02);
> +    out[1][0] = -(m10 * m22 - m20 * m12);
> +    out[1][1] =  (m00 * m22 - m20 * m02);
> +    out[1][2] = -(m00 * m12 - m10 * m02);
> +    out[2][0] =  (m10 * m21 - m20 * m11);
> +    out[2][1] = -(m00 * m21 - m20 * m01);
> +    out[2][2] =  (m00 * m11 - m10 * m01);
> +
> +    det = m00 * out[0][0] + m10 * out[0][1] +