Re: [FFmpeg-devel] [PATCH] Print out numeric values of option constants

2020-02-24 Thread Moritz Barsnick
On Tue, Feb 18, 2020 at 02:40:49 +0000, Soft Works wrote:
> It's often not obvious how option constants relate to numerical values.
> Defaults are sometimes printed as numbers and from/to are always printed as 
> numbers.
> Printing the numeric values of option constants avoids this confusion.
> It also makes it possible to see which constants are equivalent.

Was this resent by accident? It was already pushed as 
9e0a071edec93a7bd23f389fb1724ec6b43f8304
quite a long time ago.

Moritz
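
For context, the "option constants" in question are AVOption entries of type
AV_OPT_TYPE_CONST, whose numeric value is carried in default_val.i64; the patch
prints that value next to each constant's name in the help output.  A minimal
sketch with invented names and values (not taken from the patch) showing how
such constants sit next to the int option they belong to:

#include <limits.h>
#include <stddef.h>
#include "libavutil/opt.h"

/* Sketch only: an int option plus named constants in the same "profile" unit.
 * The names and values here are invented for illustration. */
typedef struct ExampleContext {
    const AVClass *class;
    int profile;
} ExampleContext;

static const AVOption example_options[] = {
    { "profile",  "coding profile", offsetof(ExampleContext, profile),
                  AV_OPT_TYPE_INT,   { .i64 = 1 }, 0, 2, 0, "profile" },
    { "baseline", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = 0 }, INT_MIN, INT_MAX, 0, "profile" },
    { "main",     NULL, 0, AV_OPT_TYPE_CONST, { .i64 = 1 }, INT_MIN, INT_MAX, 0, "profile" },
    { "high",     NULL, 0, AV_OPT_TYPE_CONST, { .i64 = 2 }, INT_MIN, INT_MAX, 0, "profile" },
    { NULL },
};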

[FFmpeg-devel] [PATCH] libswscale/x86/yuv2rgb: Fix Segmentation Fault when load unaligned data

2020-02-24 Thread Ting Fu
Signed-off-by: Ting Fu 
---
 libswscale/x86/yuv_2_rgb.asm | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/libswscale/x86/yuv_2_rgb.asm b/libswscale/x86/yuv_2_rgb.asm
index e05bbb89f5..575a84d921 100644
--- a/libswscale/x86/yuv_2_rgb.asm
+++ b/libswscale/x86/yuv_2_rgb.asm
@@ -139,7 +139,7 @@ cglobal %1_420_%2%3, GPR_num, GPR_num, reg_num, parameters
 VBROADCASTSD vr_coff,  [pointer_c_ditherq + 4  * 8]
 %endif
 %endif
-mova m_y, [py_2indexq + 2 * indexq]
+movu m_y, [py_2indexq + 2 * indexq]
 movh m_u, [pu_indexq  + indexq]
 movh m_v, [pv_indexq  + indexq]
 .loop0:
@@ -347,7 +347,7 @@ cglobal %1_420_%2%3, GPR_num, GPR_num, reg_num, parameters
 %endif ; PACK RGB15/16
 %endif ; PACK RGB15/16/32
 
-mova m_y, [py_2indexq + 2 * indexq + 8 * time_num]
+movu m_y, [py_2indexq + 2 * indexq + 8 * time_num]
 movh m_v, [pv_indexq  + indexq + 4 * time_num]
 movh m_u, [pu_indexq  + indexq + 4 * time_num]
 add imageq, 8 * depth * time_num
-- 
2.17.1
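
For readers unfamiliar with the mnemonics: mova assembles to an aligned load
(movdqa/vmovdqa), which faults when the source address is not 16/32-byte
aligned, while movu is the unaligned form (movdqu/vmovdqu) that accepts any
address.  A rough C-intrinsics analogue of the change, for illustration only
(SSE2 assumed, function name invented):

#include <stdint.h>
#include <emmintrin.h>   /* SSE2 intrinsics, illustration only */

/* _mm_load_si128 behaves like mova: it faults if 'src' is not 16-byte
 * aligned, which can happen for py_2indexq + 2 * indexq when the luma
 * pointer or stride is unaligned.  _mm_loadu_si128 behaves like movu and
 * accepts any address. */
static inline __m128i load_luma_16bytes(const uint8_t *src)
{
    return _mm_loadu_si128((const __m128i *)src);   /* was: _mm_load_si128 */
}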


Re: [FFmpeg-devel] [PATCH 1/4] avfilter/vf_sr.c: refine code to use AVPixFmtDescriptor.log2_chroma_h/w

2020-02-24 Thread Guo, Yejun


> -Original Message-
> From: Pedro Arthur [mailto:bygran...@gmail.com]
> Sent: Monday, February 24, 2020 11:30 PM
> To: FFmpeg development discussions and patches 
> Cc: Guo, Yejun 
> Subject: Re: [FFmpeg-devel] [PATCH 1/4] avfilter/vf_sr.c: refine code to use
> AVPixFmtDescriptor.log2_chroma_h/w
> 
> Em seg., 24 de fev. de 2020 às 05:50, Guo, Yejun 
> escreveu:
> >
> > Signed-off-by: Guo, Yejun 
> > ---
> >  libavfilter/vf_sr.c | 40 ++--
> >  1 file changed, 6 insertions(+), 34 deletions(-)
> >
> > diff --git a/libavfilter/vf_sr.c b/libavfilter/vf_sr.c
> > index 562b030..f000eda 100644
> > --- a/libavfilter/vf_sr.c
> > +++ b/libavfilter/vf_sr.c
> desc->log2_chroma_w);
> > +
> >  sr_context->sws_contexts[0] = sws_getContext(sws_src_w,
> sws_src_h, AV_PIX_FMT_GRAY8,
> >
> sws_dst_w, sws_dst_h, AV_PIX_FMT_GRAY8,
> >
> SWS_BICUBIC, NULL, NULL, NULL);
> > --
> > 2.7.4
> >
> LGTM

Thanks. 

Since there is an issue in patch 3 and patch 4, I will send out V2 patch set 
without this one.


Re: [FFmpeg-devel] [PATCH v4 18/21] cbs_h265: Add functions to turn HDR metadata into SEI

2020-02-24 Thread Vittorio Giovara
On Mon, Feb 24, 2020 at 5:18 PM Mark Thompson  wrote:

> On 24/02/2020 21:28, Vittorio Giovara wrote:
> > On Sun, Feb 23, 2020 at 6:41 PM Mark Thompson  wrote:
> >
> >> ---
> >>  libavcodec/Makefile   |  2 +-
> >>  libavcodec/cbs_h265.c | 99 +++
> >>  libavcodec/cbs_h265.h | 18 
> >>  3 files changed, 118 insertions(+), 1 deletion(-)
> >>  create mode 100644 libavcodec/cbs_h265.c
> >>
> >> ...
> >> +void
> >>
> ff_cbs_h265_fill_sei_mastering_display(H265RawSEIMasteringDisplayColourVolume
> >> *mdcv,
> >> +const
> >> AVMasteringDisplayMetadata *mdm)
> >> +{
> >> +memset(mdcv, 0, sizeof(*mdcv));
> >> +
> >> +if (mdm->has_primaries) {
> >> +// The values in the metadata structure are fractions between 0
> >> and 1,
> >> +// while the SEI message contains fixed-point values with an
> >> increment
> >> +// of 0.00002.  So, scale up by 50000 to convert between them.
> >> +
> >> +for (int a = 0; a < 3; a++) {
> >> +// The metadata structure stores this in RGB order, but the
> >> SEI
> >> +// wants it in GBR order.
> >> +int b = (a + 1) % 3;
> >>
> >
> > this is a pretty minor comment, but do you think you could use the more
> > legible way present in other parts of the codebase?
> > const int mapping[3] = {2, 0, 1};
> > rather than (a + 1) % 3;
>
> Ok.
>
> Is there a specific reason to make it on the stack rather than static?  I
> see it's there in hevcdec.
>

No particular reason, I just find it more readable; if you think it's a
really bad practice then you could keep the code as is.
Thanks
-- 
Vittorio
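
For reference, the two forms under discussion are equivalent when the lookup
table is indexed by the SEI (GBR) position; the {2, 0, 1} table quoted above is
presumably the inverse mapping, indexed from the metadata (RGB) side.  A tiny
sketch, illustration only:

#include <assert.h>

/* Illustration only: in the patch the loop index 'a' runs over the SEI (GBR)
 * entries and 'b' is the matching metadata (RGB) entry. */
static const int gbr_to_rgb[3] = { 1, 2, 0 };

static void check_mapping(void)
{
    for (int a = 0; a < 3; a++)
        assert(gbr_to_rgb[a] == (a + 1) % 3);
}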

Re: [FFmpeg-devel] [PATCH 03/12] lavfi: drop vf_qp

2020-02-24 Thread Vittorio Giovara
On Mon, Feb 24, 2020 at 5:07 PM Thilo Borgmann 
wrote:

> Am 24.02.20 um 22:41 schrieb Lou Logan:
> > On Mon, Feb 24, 2020, at 3:37 AM, Anton Khirnov wrote:
> >> It fundamentally depends on an API that has been deprecated for five
> >> years, has seen no commits since that time and is of highly dubious
> >> usefulness.
> >> ---
> >>  doc/filters.texi|  32 ---
> >>  libavfilter/Makefile|   1 -
> >>  libavfilter/allfilters.c|   1 -
> >>  libavfilter/vf_qp.c | 183 
> >>  tests/fate/filter-video.mak |   7 +-
> >>  tests/ref/fate/filter-pp2   |   1 -
> >>  tests/ref/fate/filter-pp3   |   1 -
> >>  7 files changed, 1 insertion(+), 225 deletions(-)
> >>  delete mode 100644 libavfilter/vf_qp.c
> >>  delete mode 100644 tests/ref/fate/filter-pp2
> >>  delete mode 100644 tests/ref/fate/filter-pp3
> >
> > Fine with me. I've never seen it used by anyone.
>
> I'm not fine with it. Declaring it's {use | use case} not existent is no
> arguments whatsoever in reality.
>
> Also, removing some functionality needs an argument - it is not keeping
> some functionality needs an argument.
>
> Nobody technically elaborates Paul's statement that it should go into side
> data. WTF? The compromise isn't even considered?
>
> Let's dig some trenches, shall we?
>
> And how come some obvious "use cases" / "needs" like [1] come into play?
> Or do we declare not continued discussions non-existent now, too?
>
> And how comes, if Michael's investigation, that all of this is based on
> use of _a function_ that is deprecated instead of direct access of
> AVFrame's fields is the cause of all of this?
>
> Shame on all of us.
>

If I may add my two cents, I feel like we are overreacting a bit and we
should take a step back.

It comes as no surprise that I do not agree that being so user-complacent
is beneficial to the overall health of the project, and that sometimes the
need to drop antiquated technologies arises. First of all, this does not mean
that we're backward-removing this feature from older applications, old
ffmpeg installs will keep working. Secondly we have to accept that making
every user always happy is 100% not achievable. In general we should treat
this as an engineering problem and accept its trade-offs: how many users
will get angry that any given functionality is removed, how many will even
notice, and how beneficial it is that a feature is actually removed. And
let's not forget each functionality has a cost, not so much in code overhead
as in maintenance and review: most people in this thread had to spend
time (the most precious resource of all!) to analyze the problem, find
alternatives and argue about this topic, while it could probably have been
spent doing better things.

For the case at hand, removing a filter that is using a deprecated
functionality seems perfectly fine; it has happened in the past and will
definitely happen in the future. If a filter turns out to be absolutely
needed for these very old files, and users can't just use an older ffmpeg
install, then I'm sure some version of a correctly-implemented filter will
magically appear on the mailing list.
For a more general picture, I hope the project will not take such a
conservative stance against removal and deprecation.

After all, it's 2020 and we're doing just fine without floppy disks :)
-- 
Vittorio

[FFmpeg-devel] [PATCH] lavc/qsvenc: add return check for ff_qsv_map_pixfmt

2020-02-24 Thread Linjie Fu
Return an error directly if the pixfmt is not supported for encoding;
otherwise the failure may be hidden until the query/check in MSDK.

Signed-off-by: Linjie Fu 
---
 libavcodec/qsvenc.c | 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/libavcodec/qsvenc.c b/libavcodec/qsvenc.c
index 571a711..40ff17c 100644
--- a/libavcodec/qsvenc.c
+++ b/libavcodec/qsvenc.c
@@ -436,7 +436,9 @@ static int init_video_param_jpeg(AVCodecContext *avctx, 
QSVEncContext *q)
 if (!desc)
 return AVERROR_BUG;
 
-ff_qsv_map_pixfmt(sw_format, &q->param.mfx.FrameInfo.FourCC);
+ret = ff_qsv_map_pixfmt(sw_format, &q->param.mfx.FrameInfo.FourCC);
+if (ret < 0)
+return AVERROR_BUG;
 
 q->param.mfx.FrameInfo.CropX  = 0;
 q->param.mfx.FrameInfo.CropY  = 0;
@@ -537,7 +539,9 @@ static int init_video_param(AVCodecContext *avctx, 
QSVEncContext *q)
 if (!desc)
 return AVERROR_BUG;
 
-ff_qsv_map_pixfmt(sw_format, &q->param.mfx.FrameInfo.FourCC);
+ret = ff_qsv_map_pixfmt(sw_format, &q->param.mfx.FrameInfo.FourCC);
+if (ret < 0)
+return AVERROR_BUG;
 
 q->param.mfx.FrameInfo.CropX  = 0;
 q->param.mfx.FrameInfo.CropY  = 0;
-- 
2.7.4


[FFmpeg-devel] [PATCH 2/2] lavc/qsvenc: add encode support for HEVC 4:2:2 8-bit and 10-bit

2020-02-24 Thread Linjie Fu
Enables HEVC Range Extension encoding support for 4:2:2 8/10 bit
on ICL+ (gen11 +) platform with VMEPAK.

Signed-off-by: Linjie Fu 
---
 libavcodec/qsv.c | 2 ++
 libavcodec/qsvenc.c  | 4 +++-
 libavcodec/qsvenc_hevc.c | 1 +
 3 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/libavcodec/qsv.c b/libavcodec/qsv.c
index 23504b5..8c0ba31 100644
--- a/libavcodec/qsv.c
+++ b/libavcodec/qsv.c
@@ -216,10 +216,12 @@ int ff_qsv_map_pixfmt(enum AVPixelFormat format, uint32_t 
*fourcc)
 *fourcc = MFX_FOURCC_P010;
 return AV_PIX_FMT_P010;
 case AV_PIX_FMT_YUV422P:
+case AV_PIX_FMT_YUYV422:
 *fourcc = MFX_FOURCC_YUY2;
 return AV_PIX_FMT_YUYV422;
 #if QSV_VERSION_ATLEAST(1, 27)
 case AV_PIX_FMT_YUV422P10:
+case AV_PIX_FMT_Y210:
 *fourcc = MFX_FOURCC_Y210;
 return AV_PIX_FMT_Y210;
 #endif
diff --git a/libavcodec/qsvenc.c b/libavcodec/qsvenc.c
index 52b4e43..571a711 100644
--- a/libavcodec/qsvenc.c
+++ b/libavcodec/qsvenc.c
@@ -66,6 +66,7 @@ static const struct {
 { MFX_PROFILE_HEVC_MAIN,"main"  },
 { MFX_PROFILE_HEVC_MAIN10,  "main10"},
 { MFX_PROFILE_HEVC_MAINSP,  "mainsp"},
+{ MFX_PROFILE_HEVC_REXT,"rext"  },
 #endif
 };
 
@@ -544,7 +545,8 @@ static int init_video_param(AVCodecContext *avctx, 
QSVEncContext *q)
 q->param.mfx.FrameInfo.CropH  = avctx->height;
 q->param.mfx.FrameInfo.AspectRatioW   = avctx->sample_aspect_ratio.num;
 q->param.mfx.FrameInfo.AspectRatioH   = avctx->sample_aspect_ratio.den;
-q->param.mfx.FrameInfo.ChromaFormat   = MFX_CHROMAFORMAT_YUV420;
+q->param.mfx.FrameInfo.ChromaFormat   = MFX_CHROMAFORMAT_YUV420 +
+!desc->log2_chroma_w + 
!desc->log2_chroma_h;
 q->param.mfx.FrameInfo.BitDepthLuma   = desc->comp[0].depth;
 q->param.mfx.FrameInfo.BitDepthChroma = desc->comp[0].depth;
 q->param.mfx.FrameInfo.Shift  = desc->comp[0].depth > 8;
diff --git a/libavcodec/qsvenc_hevc.c b/libavcodec/qsvenc_hevc.c
index 27e2232..298b575 100644
--- a/libavcodec/qsvenc_hevc.c
+++ b/libavcodec/qsvenc_hevc.c
@@ -240,6 +240,7 @@ static const AVOption options[] = {
 { "main",NULL, 0, AV_OPT_TYPE_CONST, { .i64 = MFX_PROFILE_HEVC_MAIN
}, INT_MIN, INT_MAX, VE, "profile" },
 { "main10",  NULL, 0, AV_OPT_TYPE_CONST, { .i64 = MFX_PROFILE_HEVC_MAIN10  
}, INT_MIN, INT_MAX, VE, "profile" },
 { "mainsp",  NULL, 0, AV_OPT_TYPE_CONST, { .i64 = MFX_PROFILE_HEVC_MAINSP  
}, INT_MIN, INT_MAX, VE, "profile" },
+{ "rext",NULL, 0, AV_OPT_TYPE_CONST, { .i64 = MFX_PROFILE_HEVC_REXT
}, INT_MIN, INT_MAX, VE, "profile" },
 
 { "gpb", "1: GPB (generalized P/B frame); 0: regular P frame", 
OFFSET(qsv.gpb), AV_OPT_TYPE_BOOL, { .i64 = 1 }, 0, 1, VE},
 
-- 
2.7.4
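
A note on the ChromaFormat line above: it relies on the MFX chroma-format enum
being consecutive (MFX_CHROMAFORMAT_YUV420 + 1 == ..._YUV422, + 2 == ..._YUV444),
so adding the negated log2 chroma shifts of the pixel format descriptor selects
the right value.  A sketch spelling out the mapping, illustration only:

#include <mfxvideo.h>              /* MFX_CHROMAFORMAT_* (Media SDK header) */
#include "libavutil/pixdesc.h"

/*
 * Illustration only; assumes the MFX enum is consecutive:
 *
 *   format   log2_chroma_w   log2_chroma_h   !w + !h   ChromaFormat
 *   4:2:0         1               1             0      MFX_CHROMAFORMAT_YUV420
 *   4:2:2         1               0             1      MFX_CHROMAFORMAT_YUV422
 *   4:4:4         0               0             2      MFX_CHROMAFORMAT_YUV444
 */
static int chroma_format_from_desc(const AVPixFmtDescriptor *desc)
{
    return MFX_CHROMAFORMAT_YUV420 + !desc->log2_chroma_w + !desc->log2_chroma_h;
}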


[FFmpeg-devel] [PATCH 1/2] lavc/qsvdec: add decode support for HEVC 4:2:2 8-bit and 10-bit

2020-02-24 Thread Linjie Fu
Enables HEVC Range Extension decoding support for 4:2:2 8/10 bit
on ICL+ (gen11 +) platform.

Signed-off-by: Linjie Fu 
---
 libavcodec/qsv.c  | 12 
 libavutil/hwcontext_qsv.c | 22 ++
 2 files changed, 34 insertions(+)

diff --git a/libavcodec/qsv.c b/libavcodec/qsv.c
index db98c75..23504b5 100644
--- a/libavcodec/qsv.c
+++ b/libavcodec/qsv.c
@@ -195,6 +195,10 @@ enum AVPixelFormat ff_qsv_map_fourcc(uint32_t fourcc)
 case MFX_FOURCC_NV12: return AV_PIX_FMT_NV12;
 case MFX_FOURCC_P010: return AV_PIX_FMT_P010;
 case MFX_FOURCC_P8:   return AV_PIX_FMT_PAL8;
+case MFX_FOURCC_YUY2: return AV_PIX_FMT_YUYV422;
+#if QSV_VERSION_ATLEAST(1, 27)
+case MFX_FOURCC_Y210: return AV_PIX_FMT_Y210;
+#endif
 }
 return AV_PIX_FMT_NONE;
 }
@@ -211,6 +215,14 @@ int ff_qsv_map_pixfmt(enum AVPixelFormat format, uint32_t 
*fourcc)
 case AV_PIX_FMT_P010:
 *fourcc = MFX_FOURCC_P010;
 return AV_PIX_FMT_P010;
+case AV_PIX_FMT_YUV422P:
+*fourcc = MFX_FOURCC_YUY2;
+return AV_PIX_FMT_YUYV422;
+#if QSV_VERSION_ATLEAST(1, 27)
+case AV_PIX_FMT_YUV422P10:
+*fourcc = MFX_FOURCC_Y210;
+return AV_PIX_FMT_Y210;
+#endif
 default:
 return AVERROR(ENOSYS);
 }
diff --git a/libavutil/hwcontext_qsv.c b/libavutil/hwcontext_qsv.c
index b1b6740..854dd3c 100644
--- a/libavutil/hwcontext_qsv.c
+++ b/libavutil/hwcontext_qsv.c
@@ -44,6 +44,10 @@
 #include "pixdesc.h"
 #include "time.h"
 
+#define QSV_VERSION_ATLEAST(MAJOR, MINOR)   \
+(MFX_VERSION_MAJOR > (MAJOR) || \
+ MFX_VERSION_MAJOR == (MAJOR) && MFX_VERSION_MINOR >= (MINOR))
+
 typedef struct QSVDevicePriv {
 AVBufferRef *child_device_ctx;
 } QSVDevicePriv;
@@ -103,6 +107,12 @@ static const struct {
 { AV_PIX_FMT_BGRA, MFX_FOURCC_RGB4 },
 { AV_PIX_FMT_P010, MFX_FOURCC_P010 },
 { AV_PIX_FMT_PAL8, MFX_FOURCC_P8   },
+{ AV_PIX_FMT_YUYV422,
+   MFX_FOURCC_YUY2 },
+#if QSV_VERSION_ATLEAST(1, 27)
+{ AV_PIX_FMT_Y210,
+   MFX_FOURCC_Y210 },
+#endif
 };
 
 static uint32_t qsv_fourcc_from_pix_fmt(enum AVPixelFormat pix_fmt)
@@ -774,6 +784,18 @@ static int map_frame_to_surface(const AVFrame *frame, 
mfxFrameSurface1 *surface)
 surface->Data.A = frame->data[0] + 3;
 break;
 
+case AV_PIX_FMT_YUYV422:
+surface->Data.Y = frame->data[0];
+surface->Data.U = frame->data[0] + 1;
+surface->Data.V = frame->data[0] + 3;
+break;
+
+case AV_PIX_FMT_Y210:
+surface->Data.Y16 = frame->data[0];
+surface->Data.U16 = frame->data[0] + 2;
+surface->Data.V16 = frame->data[0] + 6;
+break;
+
 default:
 return MFX_ERR_UNSUPPORTED;
 }
-- 
2.7.4
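
The new map_frame_to_surface() cases follow directly from the packed 4:2:2
layouts: YUYV422 stores Y0 U Y1 V as single bytes, and Y210 stores the same
pattern as 16-bit words, so the first U and V samples sit at byte offsets 1/3
and 2/6 respectively.  A small sketch of just that offset logic (illustration
only, function name invented):

#include <stdint.h>

/* Illustration only: byte offsets of the first chroma samples in the two
 * packed 4:2:2 layouts added above.
 *   YUYV422 (8-bit samples):   Y0 U Y1 V  ->  U at +1, V at +3
 *   Y210 (10-bit in 16-bit):   Y0 U Y1 V  ->  U at +2, V at +6  */
static void split_packed_422(uint8_t *base, int is_y210,
                             uint8_t **y, uint8_t **u, uint8_t **v)
{
    int step = is_y210 ? 2 : 1;   /* bytes per sample */
    *y = base;
    *u = base + 1 * step;
    *v = base + 3 * step;
}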


Re: [FFmpeg-devel] [PATCH 03/12] lavfi: drop vf_qp

2020-02-24 Thread Hendrik Leppkes
On Mon, Feb 24, 2020 at 11:08 PM Thilo Borgmann  wrote:
> And how comes, if Michael's investigation, that all of this is based on use 
> of _a function_ that is deprecated instead of direct access of AVFrame's 
> fields is the cause of all of this?
>

The entire functionality is deprecated; the fact that the functions in use
here were deprecated in a different commit than the fields they internally
access does not change that.

- Hendrik

Re: [FFmpeg-devel] [PATCH v4 03/21] cbs: Describe allocate/free methods in tabular form

2020-02-24 Thread Mark Thompson
On 24/02/2020 22:42, Andreas Rheinhardt wrote:
> Mark Thompson:
>> On 24/02/2020 22:10, Andreas Rheinhardt wrote:
>>> Mark Thompson:
 On 24/02/2020 17:19, Andreas Rheinhardt wrote:
> Mark Thompson:
>> Unit types are split into three categories, depending on how their
>> content is managed:
>> * POD structure - these require no special treatment.
>> * Structure containing references to refcounted buffers - these can use
>>   a common free function when the offsets of all the internal references
>>   are known.
>> * More complex structures - these still require ad-hoc treatment.
>>
>> For each codec we can then maintain a table of descriptors for each set 
>> of
>> equivalent unit types, defining the mechanism needed to allocate/free 
>> that
>> unit content.  This is not required to be used immediately - a new alloc
>> function supports this, but does not replace the old one which works 
>> without
>> referring to these tables.
>> ---
>>  libavcodec/cbs.c  | 69 +++
>>  libavcodec/cbs.h  |  9 +
>>  libavcodec/cbs_internal.h | 60 ++
>>  3 files changed, 138 insertions(+)
>>
>> ...
>> +typedef struct CodedBitstreamUnitTypeDescriptor {
>> ...
>> +} CodedBitstreamUnitTypeDescriptor;
>
> I wonder whether you should add const to the typedef in order to omit
> it everywhere else. After all, no CodedBitstreamUnitTypeDescriptor
> will ever be assembled during runtime.

 It definitely makes sense to add it to reduce errors.  Not so sure about 
 the removing it from everywhere else - the fact that it looks wrong at the 
 point of use probably causes more confusion.

 So, I've done the first part but not the second (helpfully, redundant type 
 qualifiers have no effect).
>>>
>>> MSVC emits a warning (or just a note or so) for this.
>> Urgh.  Is that definitely intended or is it a bug in the compiler?  The C 
>> standard is very clear that this is fine (C11 6.7.3).
>>
> This is also the way it is in C99, but given that [1] says that it
> leads to an error with MSVC in ANSI-C mode (which means C90), I looked
> at C90 and found:
> "The same type qualifier shall not appear more than once in the same
> specifier list or qualifier list, either directly or via one or more
> typedefs."

Ok, that's fatal to this plan.  I think we're better with the consts on the 
individual cases (less confusing, if slightly less efficient), so I've reverted 
back to the original.

Thanks,

- Mark
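
A minimal reproduction of the qualifier issue being discussed (names
hypothetical): the const arrives once through the typedef and once at the point
of use, which C99/C11 6.7.3 explicitly allows but the C90 constraint quoted
above forbids, hence the MSVC diagnostic in ANSI-C mode.

/* Hypothetical example, not from the patch. */
typedef const struct UnitTypeDescriptorSketch {
    int nb_unit_types;
} UnitTypeDescriptorSketch;

/* 'const' now appears twice: once via the typedef, once directly.
 * Fine in C99/C11 (duplicates are ignored), a constraint violation in C90. */
static const UnitTypeDescriptorSketch desc_table[] = {
    { 3 },
};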

Re: [FFmpeg-devel] [PATCH v4 03/21] cbs: Describe allocate/free methods in tabular form

2020-02-24 Thread Andreas Rheinhardt
Mark Thompson:
> On 24/02/2020 22:10, Andreas Rheinhardt wrote:
>> Mark Thompson:
>>> On 24/02/2020 17:19, Andreas Rheinhardt wrote:
 Mark Thompson:
> Unit types are split into three categories, depending on how their
> content is managed:
> * POD structure - these require no special treatment.
> * Structure containing references to refcounted buffers - these can use
>   a common free function when the offsets of all the internal references
>   are known.
> * More complex structures - these still require ad-hoc treatment.
>
> For each codec we can then maintain a table of descriptors for each set of
> equivalent unit types, defining the mechanism needed to allocate/free that
> unit content.  This is not required to be used immediately - a new alloc
> function supports this, but does not replace the old one which works 
> without
> referring to these tables.
> ---
>  libavcodec/cbs.c  | 69 +++
>  libavcodec/cbs.h  |  9 +
>  libavcodec/cbs_internal.h | 60 ++
>  3 files changed, 138 insertions(+)
>
> ...
> diff --git a/libavcodec/cbs_internal.h b/libavcodec/cbs_internal.h
> index 4c5a535ca6..615f514a85 100644
> --- a/libavcodec/cbs_internal.h
> +++ b/libavcodec/cbs_internal.h
> @@ -25,11 +25,71 @@
>  #include "put_bits.h"
>  
>  
> +enum {
> +// Unit content is a simple structure.
> +CBS_CONTENT_TYPE_POD,
> +// Unit content contains some references to other structures, but all
> +// managed via buffer reference counting.  The descriptor defines the
> +// structure offsets of every buffer reference.
> +CBS_CONTENT_TYPE_INTERNAL_REFS,
> +// Unit content is something more complex.  The descriptor defines
> +// special functions to manage the content.
> +CBS_CONTENT_TYPE_COMPLEX,
> +};
> +
> +enum {
> +  // Maximum number of unit types described by the same unit type
> +  // descriptor.
> +  CBS_MAX_UNIT_TYPES = 16,

 This is quite big and I wonder whether it would not be better to
 simply split the HEVC-slices into two range descriptors in order to
 reduce this to three.
>>>
>>> As-written, the range case is only covering the one actual range case 
>>> (MPEG-2 slices).  I think it's preferable to leave the HEVC slice headers 
>>> as-is, because having the full list there explicitly is much clearer.
>>>
>>> As an alternative, would you prefer the array here to be a pointer + 
>>> compound literal array?  It would avoid this constant and save few bytes of 
>>> space on average, at the cost of the definitions being slightly more 
>>> complex.
>>>
>> No.
>>
> +  // Maximum number of reference buffer offsets in any one unit.
> +  CBS_MAX_REF_OFFSETS = 2,
> +  // Special value used in a unit type descriptor to indicate that it
> +  // applies to a large range of types rather than a set of discrete
> +  // values.
> +  CBS_UNIT_TYPE_RANGE = -1,
> +};
> +
> +typedef struct CodedBitstreamUnitTypeDescriptor {
> +// Number of entries in the unit_types array, or the special value
> +// CBS_UNIT_TYPE_RANGE to indicate that the range fields should be
> +// used instead.
> +int nb_unit_types;
> +
> +// Array of unit types that this entry describes.
> +const CodedBitstreamUnitType unit_types[CBS_MAX_UNIT_TYPES];
> +
> +// Start and end of unit type range, used if nb_unit_types == 0.

 nb_unit_types == 0 is actually used for the sentinel in the
 CodedBitstreamUnitTypeDescriptor-array. nb_unit_types ==
 CBS_UNIT_TYPE_RANGE indicates that this descriptor uses ranges.
>>>
>>> Fixed.
>>>
> +const CodedBitstreamUnitType unit_type_range_start;
> +const CodedBitstreamUnitType unit_type_range_end;

 The ranges could be free (size-wise) if you used a union with unit_types.
>>>
>>> Anonymous unions are still not allowed in FFmpeg, unfortunately (they are 
>>> C11, though many compilers supported them before that).
>>>
>> What about a non-anonymous union? It would only be used in
>> cbs_find_unit_type_desc().
> 
> Mildly against because it would be ugly to bunch together the unrelated 
> fields, but I could be pushed into it.
> 
> +
> +// The type of content described, from CBS_CONTENT_TYPE_*.
> +intcontent_type;

 Maybe make a proper type out of the CBS_CONTENT_TYPE_*-enum and use it
 here?
>>>
>>> That's a good idea; done.
>>>
> +// The size of the structure which should be allocated to contain
> +// the decomposed content of this type of unit.
> +size_t content_size;
> +
> +// Number of entries in the ref_offsets array.  Only used if the
> +// content_type is CBS_CONT

Re: [FFmpeg-devel] [PATCH v4 03/21] cbs: Describe allocate/free methods in tabular form

2020-02-24 Thread Mark Thompson
On 24/02/2020 22:10, Andreas Rheinhardt wrote:
> Mark Thompson:
>> On 24/02/2020 17:19, Andreas Rheinhardt wrote:
>>> Mark Thompson:
 Unit types are split into three categories, depending on how their
 content is managed:
 * POD structure - these require no special treatment.
 * Structure containing references to refcounted buffers - these can use
   a common free function when the offsets of all the internal references
   are known.
 * More complex structures - these still require ad-hoc treatment.

 For each codec we can then maintain a table of descriptors for each set of
 equivalent unit types, defining the mechanism needed to allocate/free that
 unit content.  This is not required to be used immediately - a new alloc
 function supports this, but does not replace the old one which works 
 without
 referring to these tables.
 ---
  libavcodec/cbs.c  | 69 +++
  libavcodec/cbs.h  |  9 +
  libavcodec/cbs_internal.h | 60 ++
  3 files changed, 138 insertions(+)

 ...
 diff --git a/libavcodec/cbs_internal.h b/libavcodec/cbs_internal.h
 index 4c5a535ca6..615f514a85 100644
 --- a/libavcodec/cbs_internal.h
 +++ b/libavcodec/cbs_internal.h
 @@ -25,11 +25,71 @@
  #include "put_bits.h"
  
  
 +enum {
 +// Unit content is a simple structure.
 +CBS_CONTENT_TYPE_POD,
 +// Unit content contains some references to other structures, but all
 +// managed via buffer reference counting.  The descriptor defines the
 +// structure offsets of every buffer reference.
 +CBS_CONTENT_TYPE_INTERNAL_REFS,
 +// Unit content is something more complex.  The descriptor defines
 +// special functions to manage the content.
 +CBS_CONTENT_TYPE_COMPLEX,
 +};
 +
 +enum {
 +  // Maximum number of unit types described by the same unit type
 +  // descriptor.
 +  CBS_MAX_UNIT_TYPES = 16,
>>>
>>> This is quite big and I wonder whether it would not be better to
>>> simply split the HEVC-slices into two range descriptors in order to
>>> reduce this to three.
>>
>> As-written, the range case is only covering the one actual range case 
>> (MPEG-2 slices).  I think it's preferable to leave the HEVC slice headers 
>> as-is, because having the full list there explicitly is much clearer.
>>
>> As an alternative, would you prefer the array here to be a pointer + 
>> compound literal array?  It would avoid this constant and save few bytes of 
>> space on average, at the cost of the definitions being slightly more complex.
>>
> No.
> 
 +  // Maximum number of reference buffer offsets in any one unit.
 +  CBS_MAX_REF_OFFSETS = 2,
 +  // Special value used in a unit type descriptor to indicate that it
 +  // applies to a large range of types rather than a set of discrete
 +  // values.
 +  CBS_UNIT_TYPE_RANGE = -1,
 +};
 +
 +typedef struct CodedBitstreamUnitTypeDescriptor {
 +// Number of entries in the unit_types array, or the special value
 +// CBS_UNIT_TYPE_RANGE to indicate that the range fields should be
 +// used instead.
 +int nb_unit_types;
 +
 +// Array of unit types that this entry describes.
 +const CodedBitstreamUnitType unit_types[CBS_MAX_UNIT_TYPES];
 +
 +// Start and end of unit type range, used if nb_unit_types == 0.
>>>
>>> nb_unit_types == 0 is actually used for the sentinel in the
>>> CodedBitstreamUnitTypeDescriptor-array. nb_unit_types ==
>>> CBS_UNIT_TYPE_RANGE indicates that this descriptor uses ranges.
>>
>> Fixed.
>>
 +const CodedBitstreamUnitType unit_type_range_start;
 +const CodedBitstreamUnitType unit_type_range_end;
>>>
>>> The ranges could be free (size-wise) if you used a union with unit_types.
>>
>> Anonymous unions are still not allowed in FFmpeg, unfortunately (they are 
>> C11, though many compilers supported them before that).
>>
> What about a non-anonymous union? It would only be used in
> cbs_find_unit_type_desc().

Mildly against because it would be ugly to bunch together the unrelated fields, 
but I could be pushed into it.
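
For illustration, the named-union alternative being floated would look roughly
like this (type and field names invented); it lets the range bounds share
storage with the discrete list without needing C11 anonymous unions:

#include <stdint.h>

/* Sketch only, invented names: the discrete list and the range bounds share
 * storage through a named union member, so only cbs_find_unit_type_desc()
 * has to spell out the extra ".u". */
typedef struct UnitTypeDescriptorSketch {
    int nb_unit_types;                /* CBS_UNIT_TYPE_RANGE (-1) selects .range */
    union {
        uint32_t list[16];            /* discrete unit types */
        struct {
            uint32_t start, end;      /* inclusive range of unit types */
        } range;
    } u;
} UnitTypeDescriptorSketch;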

 +
 +// The type of content described, from CBS_CONTENT_TYPE_*.
 +intcontent_type;
>>>
>>> Maybe make a proper type out of the CBS_CONTENT_TYPE_*-enum and use it
>>> here?
>>
>> That's a good idea; done.
>>
 +// The size of the structure which should be allocated to contain
 +// the decomposed content of this type of unit.
 +size_t content_size;
 +
 +// Number of entries in the ref_offsets array.  Only used if the
 +// content_type is CBS_CONTENT_TYPE_INTERNAL_REFS.
 +int nb_ref_offsets;
 +// The structure must contain two adjacent elements:
 +//   type  

Re: [FFmpeg-devel] [PATCH v4 18/21] cbs_h265: Add functions to turn HDR metadata into SEI

2020-02-24 Thread Mark Thompson
On 24/02/2020 21:28, Vittorio Giovara wrote:
> On Sun, Feb 23, 2020 at 6:41 PM Mark Thompson  wrote:
> 
>> ---
>>  libavcodec/Makefile   |  2 +-
>>  libavcodec/cbs_h265.c | 99 +++
>>  libavcodec/cbs_h265.h | 18 
>>  3 files changed, 118 insertions(+), 1 deletion(-)
>>  create mode 100644 libavcodec/cbs_h265.c
>>
>> ...
>> +void
>> ff_cbs_h265_fill_sei_mastering_display(H265RawSEIMasteringDisplayColourVolume
>> *mdcv,
>> +const
>> AVMasteringDisplayMetadata *mdm)
>> +{
>> +memset(mdcv, 0, sizeof(*mdcv));
>> +
>> +if (mdm->has_primaries) {
>> +// The values in the metadata structure are fractions between 0
>> and 1,
>> +// while the SEI message contains fixed-point values with an
>> increment
>> +// of 0.00002.  So, scale up by 50000 to convert between them.
>> +
>> +for (int a = 0; a < 3; a++) {
>> +// The metadata structure stores this in RGB order, but the
>> SEI
>> +// wants it in GBR order.
>> +int b = (a + 1) % 3;
>>
> 
> this is a pretty minor comment, but do you think you could use the more
> legible way present in other parts of the codebase?
> const int mapping[3] = {2, 0, 1};
> rather than (a + 1) % 3;

Ok.

Is there a specific reason to make it on the stack rather than static?  I see 
it's there in hevcdec.

>> +mdcv->display_primaries_x[a] =
>> +rescale_fraction(mdm->display_primaries[b][0], 50000);
>> +mdcv->display_primaries_y[a] =
>> +rescale_fraction(mdm->display_primaries[b][1], 50000);
>> +}
>> +
>> +mdcv->white_point_x = rescale_fraction(mdm->white_point[0],
>> 50000);
>> +mdcv->white_point_y = rescale_fraction(mdm->white_point[1],
>> 50000);
>> +}
>> +
>> +if (mdm->has_luminance) {
>> +// Metadata are rational values in candelas per square metre, SEI
>> +// contains fixed point in units of 0.0001 candelas per square
>> +// metre.  So scale up by 10000 to convert between them, and clip
>> to
>> +// ensure that we don't overflow.
>> +
>> +mdcv->max_display_mastering_luminance =
>> +rescale_clip(mdm->max_luminance, 10000, UINT32_MAX);
>> +mdcv->min_display_mastering_luminance =
>> +rescale_clip(mdm->min_luminance, 10000, UINT32_MAX);
>> +
>> +// The spec has a hard requirement that min is less than the max,
>> +// and the SEI-writing code enforces that.
>> +if (!(mdcv->min_display_mastering_luminance <
>> +  mdcv->max_display_mastering_luminance)) {
>> +if (mdcv->max_display_mastering_luminance == UINT32_MAX)
>> +mdcv->min_display_mastering_luminance =
>> +mdcv->max_display_mastering_luminance - 1;
>> +else
>> +mdcv->max_display_mastering_luminance =
>> +mdcv->min_display_mastering_luminance + 1;
>> +}
>> +} else {
>> +mdcv->max_display_mastering_luminance = 1;
>> +mdcv->min_display_mastering_luminance = 0;
>> +}
>> +}
>> ...

- Mark

Re: [FFmpeg-devel] [PATCH v4 03/21] cbs: Describe allocate/free methods in tabular form

2020-02-24 Thread Andreas Rheinhardt
Mark Thompson:
> On 24/02/2020 17:19, Andreas Rheinhardt wrote:
>> Mark Thompson:
>>> Unit types are split into three categories, depending on how their
>>> content is managed:
>>> * POD structure - these require no special treatment.
>>> * Structure containing references to refcounted buffers - these can use
>>>   a common free function when the offsets of all the internal references
>>>   are known.
>>> * More complex structures - these still require ad-hoc treatment.
>>>
>>> For each codec we can then maintain a table of descriptors for each set of
>>> equivalent unit types, defining the mechanism needed to allocate/free that
>>> unit content.  This is not required to be used immediately - a new alloc
>>> function supports this, but does not replace the old one which works without
>>> referring to these tables.
>>> ---
>>>  libavcodec/cbs.c  | 69 +++
>>>  libavcodec/cbs.h  |  9 +
>>>  libavcodec/cbs_internal.h | 60 ++
>>>  3 files changed, 138 insertions(+)
>>>
>>> ...
>>> diff --git a/libavcodec/cbs_internal.h b/libavcodec/cbs_internal.h
>>> index 4c5a535ca6..615f514a85 100644
>>> --- a/libavcodec/cbs_internal.h
>>> +++ b/libavcodec/cbs_internal.h
>>> @@ -25,11 +25,71 @@
>>>  #include "put_bits.h"
>>>  
>>>  
>>> +enum {
>>> +// Unit content is a simple structure.
>>> +CBS_CONTENT_TYPE_POD,
>>> +// Unit content contains some references to other structures, but all
>>> +// managed via buffer reference counting.  The descriptor defines the
>>> +// structure offsets of every buffer reference.
>>> +CBS_CONTENT_TYPE_INTERNAL_REFS,
>>> +// Unit content is something more complex.  The descriptor defines
>>> +// special functions to manage the content.
>>> +CBS_CONTENT_TYPE_COMPLEX,
>>> +};
>>> +
>>> +enum {
>>> +  // Maximum number of unit types described by the same unit type
>>> +  // descriptor.
>>> +  CBS_MAX_UNIT_TYPES = 16,
>>
>> This is quite big and I wonder whether it would not be better to
>> simply split the HEVC-slices into two range descriptors in order to
>> reduce this to three.
> 
> As-written, the range case is only covering the one actual range case (MPEG-2 
> slices).  I think it's preferable to leave the HEVC slice headers as-is, 
> because having the full list there explicitly is much clearer.
> 
> As an alternative, would you prefer the array here to be a pointer + compound 
> literal array?  It would avoid this constant and save few bytes of space on 
> average, at the cost of the definitions being slightly more complex.
> 
No.

>>> +  // Maximum number of reference buffer offsets in any one unit.
>>> +  CBS_MAX_REF_OFFSETS = 2,
>>> +  // Special value used in a unit type descriptor to indicate that it
>>> +  // applies to a large range of types rather than a set of discrete
>>> +  // values.
>>> +  CBS_UNIT_TYPE_RANGE = -1,
>>> +};
>>> +
>>> +typedef struct CodedBitstreamUnitTypeDescriptor {
>>> +// Number of entries in the unit_types array, or the special value
>>> +// CBS_UNIT_TYPE_RANGE to indicate that the range fields should be
>>> +// used instead.
>>> +int nb_unit_types;
>>> +
>>> +// Array of unit types that this entry describes.
>>> +const CodedBitstreamUnitType unit_types[CBS_MAX_UNIT_TYPES];
>>> +
>>> +// Start and end of unit type range, used if nb_unit_types == 0.
>>
>> nb_unit_types == 0 is actually used for the sentinel in the
>> CodedBitstreamUnitTypeDescriptor-array. nb_unit_types ==
>> CBS_UNIT_TYPE_RANGE indicates that this descriptor uses ranges.
> 
> Fixed.
> 
>>> +const CodedBitstreamUnitType unit_type_range_start;
>>> +const CodedBitstreamUnitType unit_type_range_end;
>>
>> The ranges could be free (size-wise) if you used a union with unit_types.
> 
> Anonymous unions are still not allowed in FFmpeg, unfortunately (they are 
> C11, though many compilers supported them before that).
> 
What about a non-anonymous union? It would only be used in
cbs_find_unit_type_desc().
>>> +
>>> +// The type of content described, from CBS_CONTENT_TYPE_*.
>>> +intcontent_type;
>>
>> Maybe make a proper type out of the CBS_CONTENT_TYPE_*-enum and use it
>> here?
> 
> That's a good idea; done.
> 
>>> +// The size of the structure which should be allocated to contain
>>> +// the decomposed content of this type of unit.
>>> +size_t content_size;
>>> +
>>> +// Number of entries in the ref_offsets array.  Only used if the
>>> +// content_type is CBS_CONTENT_TYPE_INTERNAL_REFS.
>>> +int nb_ref_offsets;
>>> +// The structure must contain two adjacent elements:
>>> +//   type*field;
>>> +//   AVBufferRef *field_ref;
>>> +// where field points to something in the buffer referred to by
>>> +// field_ref.  This offset is then set to offsetof(struct, field).
>>> +size_t ref_offsets[CBS_MAX_REF_OFFSETS];
>>> +
>>> +void (*content_f

Re: [FFmpeg-devel] [PATCH 03/12] lavfi: drop vf_qp

2020-02-24 Thread Thilo Borgmann
Am 24.02.20 um 22:41 schrieb Lou Logan:
> On Mon, Feb 24, 2020, at 3:37 AM, Anton Khirnov wrote:
>> It fundamentally depends on an API that has been deprecated for five
>> years, has seen no commits since that time and is of highly dubious
>> usefulness.
>> ---
>>  doc/filters.texi|  32 ---
>>  libavfilter/Makefile|   1 -
>>  libavfilter/allfilters.c|   1 -
>>  libavfilter/vf_qp.c | 183 
>>  tests/fate/filter-video.mak |   7 +-
>>  tests/ref/fate/filter-pp2   |   1 -
>>  tests/ref/fate/filter-pp3   |   1 -
>>  7 files changed, 1 insertion(+), 225 deletions(-)
>>  delete mode 100644 libavfilter/vf_qp.c
>>  delete mode 100644 tests/ref/fate/filter-pp2
>>  delete mode 100644 tests/ref/fate/filter-pp3
> 
> Fine with me. I've never seen it used by anyone.

I'm not fine with it. Declaring its {use | use case} non-existent is no 
argument whatsoever in reality.

Also, it is removing some functionality that needs an argument - not keeping 
some functionality.

Nobody technically elaborates Paul's statement that it should go into side 
data. WTF? The compromise isn't even considered?

Let's dig some trenches, shall we?

And how come some obvious "use cases" / "needs" like [1] come into play? Or do 
we declare not continued discussions non-existent now, too?

And how comes, if Michael's investigation, that all of this is based on use of 
_a function_ that is deprecated instead of direct access of AVFrame's fields is 
the cause of all of this?

Shame on all of us.

-Thilo

[1] https://lists.ffmpeg.org/pipermail/ffmpeg-devel/2019-August/247401.html

Re: [FFmpeg-devel] [PATCH v4 03/21] cbs: Describe allocate/free methods in tabular form

2020-02-24 Thread Mark Thompson
On 24/02/2020 17:19, Andreas Rheinhardt wrote:
> Mark Thompson:
>> Unit types are split into three categories, depending on how their
>> content is managed:
>> * POD structure - these require no special treatment.
>> * Structure containing references to refcounted buffers - these can use
>>   a common free function when the offsets of all the internal references
>>   are known.
>> * More complex structures - these still require ad-hoc treatment.
>>
>> For each codec we can then maintain a table of descriptors for each set of
>> equivalent unit types, defining the mechanism needed to allocate/free that
>> unit content.  This is not required to be used immediately - a new alloc
>> function supports this, but does not replace the old one which works without
>> referring to these tables.
>> ---
>>  libavcodec/cbs.c  | 69 +++
>>  libavcodec/cbs.h  |  9 +
>>  libavcodec/cbs_internal.h | 60 ++
>>  3 files changed, 138 insertions(+)
>>
>> ...
>> diff --git a/libavcodec/cbs_internal.h b/libavcodec/cbs_internal.h
>> index 4c5a535ca6..615f514a85 100644
>> --- a/libavcodec/cbs_internal.h
>> +++ b/libavcodec/cbs_internal.h
>> @@ -25,11 +25,71 @@
>>  #include "put_bits.h"
>>  
>>  
>> +enum {
>> +// Unit content is a simple structure.
>> +CBS_CONTENT_TYPE_POD,
>> +// Unit content contains some references to other structures, but all
>> +// managed via buffer reference counting.  The descriptor defines the
>> +// structure offsets of every buffer reference.
>> +CBS_CONTENT_TYPE_INTERNAL_REFS,
>> +// Unit content is something more complex.  The descriptor defines
>> +// special functions to manage the content.
>> +CBS_CONTENT_TYPE_COMPLEX,
>> +};
>> +
>> +enum {
>> +  // Maximum number of unit types described by the same unit type
>> +  // descriptor.
>> +  CBS_MAX_UNIT_TYPES = 16,
> 
> This is quite big and I wonder whether it would not be better to
> simply split the HEVC-slices into two range descriptors in order to
> reduce this to three.

As-written, the range case is only covering the one actual range case (MPEG-2 
slices).  I think it's preferable to leave the HEVC slice headers as-is, 
because having the full list there explicitly is much clearer.

As an alternative, would you prefer the array here to be a pointer + compound 
literal array?  It would avoid this constant and save few bytes of space on 
average, at the cost of the definitions being slightly more complex.
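
The pointer-plus-compound-literal alternative mentioned above would look
roughly like this (sketch with invented names and example values); it drops the
CBS_MAX_UNIT_TYPES bound at the cost of a slightly noisier initialiser:

#include <stdint.h>

/* Sketch only, invented names and example values. */
typedef struct UnitTypeDescriptorSketch {
    int             nb_unit_types;
    const uint32_t *unit_types;       /* points at a compound literal */
} UnitTypeDescriptorSketch;

static const UnitTypeDescriptorSketch hevc_slices_sketch = {
    .nb_unit_types = 3,
    .unit_types    = (const uint32_t[]){ 19, 20, 21 },   /* example NAL unit types */
};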

>> +  // Maximum number of reference buffer offsets in any one unit.
>> +  CBS_MAX_REF_OFFSETS = 2,
>> +  // Special value used in a unit type descriptor to indicate that it
>> +  // applies to a large range of types rather than a set of discrete
>> +  // values.
>> +  CBS_UNIT_TYPE_RANGE = -1,
>> +};
>> +
>> +typedef struct CodedBitstreamUnitTypeDescriptor {
>> +// Number of entries in the unit_types array, or the special value
>> +// CBS_UNIT_TYPE_RANGE to indicate that the range fields should be
>> +// used instead.
>> +int nb_unit_types;
>> +
>> +// Array of unit types that this entry describes.
>> +const CodedBitstreamUnitType unit_types[CBS_MAX_UNIT_TYPES];
>> +
>> +// Start and end of unit type range, used if nb_unit_types == 0.
> 
> nb_unit_types == 0 is actually used for the sentinel in the
> CodedBitstreamUnitTypeDescriptor-array. nb_unit_types ==
> CBS_UNIT_TYPE_RANGE indicates that this descriptor uses ranges.

Fixed.

>> +const CodedBitstreamUnitType unit_type_range_start;
>> +const CodedBitstreamUnitType unit_type_range_end;
> 
> The ranges could be free (size-wise) if you used a union with unit_types.

Anonymous unions are still not allowed in FFmpeg, unfortunately (they are C11, 
though many compilers supported them before that).

>> +
>> +// The type of content described, from CBS_CONTENT_TYPE_*.
>> +intcontent_type;
> 
> Maybe make a proper type out of the CBS_CONTENT_TYPE_*-enum and use it
> here?

That's a good idea; done.

>> +// The size of the structure which should be allocated to contain
>> +// the decomposed content of this type of unit.
>> +size_t content_size;
>> +
>> +// Number of entries in the ref_offsets array.  Only used if the
>> +// content_type is CBS_CONTENT_TYPE_INTERNAL_REFS.
>> +int nb_ref_offsets;
>> +// The structure must contain two adjacent elements:
>> +//   type*field;
>> +//   AVBufferRef *field_ref;
>> +// where field points to something in the buffer referred to by
>> +// field_ref.  This offset is then set to offsetof(struct, field).
>> +size_t ref_offsets[CBS_MAX_REF_OFFSETS];
>> +
>> +void (*content_free)(void *opaque, uint8_t *data);
> 
> Is there a usecase for a dedicated free-function different for a unit
> of type CBS_CONTENT_TYPE_INTERNAL_REFS? If not, then one could use a
> union for this and the ref_offset stuff.

Yes, but the 

Re: [FFmpeg-devel] Status and Plans for Subtitle Filters

2020-02-24 Thread Marton Balint



On Mon, 24 Feb 2020, Nicolas George wrote:


Michael Niedermayer (2020-02-24):

No, they can't: being the same subtitle or not is part of the semantic.



Does anyone else share this opinion?

I am asking because we need to resolve such differences of opinion to
move forward.
There's no way to design an API if such relatively fundamental things
have disagreements on them.


It's not a matter of opinion, it is actually quite obvious:

# 1
# 00:00:10,000 --> 00:00:11,000
# Hello.
#
# 2
# 00:00:11,000 --> 00:00:12,000
# Hello.

… means that two people said Hello in quick succession while:


That is not the real issue (although the normally used techniques to 
signal different speakers are coloring, alignment or simply putting both 
sentences in a single subtitle).


The real issue is that for animations like \move{} the rendering cannot be 
split. So it seems that if we want to support animations, hard splitting is 
not an option.




Some subtitles have overlap all over the place. I am thinking in
particular of some animé fansub, with on-screen signs and onomatopoeia
translated and cultural notes, all along with dialogue. De-overlapping
would increase their size considerably, and cause actual dialogue to be
split, which results in the problems I have explained above.

But I don't know why you are so focussed on this. Overlapping is not a
problem, it's just something to keep in mind while designing the API,
like the fact that bitmap subtitles have several rectangles. It's
actually quite easy to handle.


My problem with overlapping is that in order to render subtitles 
at a given time you need more than one AVSubtitle. That is a 
fundamental difference to audio or video AVFrames where a single object 
fully represents the media at a given time.


Maybe we should deal with collections of AVSubtitles which affect time 
durations; this way you don't need to hard-merge the subtitle rectangles 
but can still reference objects which fully describe the subtitles for a 
time period.


Regards,
Marton

Re: [FFmpeg-devel] [PATCH v4 08/21] h264_redundant_pps: Make it reference-compatible

2020-02-24 Thread Mark Thompson
On 24/02/2020 16:07, Andreas Rheinhardt wrote:
> Mark Thompson:
>> From: Andreas Rheinhardt 
>>
>> Since c6a63e11092c975b89d824f08682fe31948d3686, the parameter sets
>> modified as content of PPS units were references shared with the
>> CodedBitstreamH264Context, so modifying them alters the parsing process
>> of future access units which meant that frames often got discarded
>> because invalid values were parsed. This patch makes h264_redundant_pps
>> compatible with the reality of reference-counted parameter sets.
>>
>> Signed-off-by: Andreas Rheinhardt 
>> Signed-off-by: Mark Thompson 
>> ---
> 
> You can now add ticket #7807 to the commit message.

Added.

Thanks,

- Mark

Re: [FFmpeg-devel] [PATCH v4 16/21] cbs_h264: Add a function to turn stereo 3D metadata into SEI

2020-02-24 Thread Mark Thompson
On 24/02/2020 01:57, Andreas Rheinhardt wrote:
> Mark Thompson:
>> ---
>>  libavcodec/cbs_h264.c | 47 +++
>>  libavcodec/cbs_h264.h |  8 
>>  2 files changed, 55 insertions(+)
>>
>> diff --git a/libavcodec/cbs_h264.c b/libavcodec/cbs_h264.c
>> index 75759c7f25..cc52f68550 100644
>> --- a/libavcodec/cbs_h264.c
>> +++ b/libavcodec/cbs_h264.c
>> @@ -16,6 +16,8 @@
>>   * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 
>> USA
>>   */
>>  
>> +#include "libavutil/stereo3d.h"
>> +
>>  #include "cbs_h264.h"
>>  
>>  int ff_cbs_h264_add_sei_message(CodedBitstreamContext *ctx,
>> @@ -104,3 +106,48 @@ void 
>> ff_cbs_h264_delete_sei_message(CodedBitstreamContext *ctx,
>>  (sei->payload_count - position) * sizeof(*sei->payload));
>>  }
>>  }
>> +
>> +void 
>> ff_cbs_h264_fill_sei_frame_packing_arrangement(H264RawSEIFramePackingArrangement
>>  *fp,
>> +const AVStereo3D *st)
>> +{
>> +static const int type_map[] = {
> 
> Why not uint8_t instead of int? After all,
> frame_packing_arrangement_type is an uint8_t.

Yep, changed.

>> +[AV_STEREO3D_2D]  = 6,
>> +[AV_STEREO3D_SIDEBYSIDE]  = 3,
>> +[AV_STEREO3D_TOPBOTTOM]   = 4,
>> +[AV_STEREO3D_FRAMESEQUENCE]   = 5,
>> +[AV_STEREO3D_CHECKERBOARD]= 0,
>> +[AV_STEREO3D_SIDEBYSIDE_QUINCUNX] = 3,
>> +[AV_STEREO3D_LINES]   = 2,
>> +[AV_STEREO3D_COLUMNS] = 1,
>> +};
>> +
>> +memset(fp, 0, sizeof(*fp));
>> +
>> +if (st->type >= FF_ARRAY_ELEMS(type_map))
>> +return;
>> +
>> +fp->frame_packing_arrangement_type = type_map[st->type];
>> +
>> +fp->quincunx_sampling_flag =
>> +st->type == AV_STEREO3D_CHECKERBOARD ||
>> +st->type == AV_STEREO3D_SIDEBYSIDE_QUINCUNX;
>> +
>> +if (st->type == AV_STEREO3D_2D)
>> +fp->content_interpretation_type = 0;
>> +else if (st->flags & AV_STEREO3D_FLAG_INVERT)
>> +fp->content_interpretation_type = 2;
>> +else
>> +fp->content_interpretation_type = 1;
>> +
>> +if (st->type == AV_STEREO3D_FRAMESEQUENCE) {
>> +if (st->flags & AV_STEREO3D_FLAG_INVERT)
>> +fp->current_frame_is_frame0_flag =
>> +st->view == AV_STEREO3D_VIEW_RIGHT;
>> +else
>> +fp->current_frame_is_frame0_flag =
>> +st->view == AV_STEREO3D_VIEW_LEFT;
>> +}
>> +
>> +fp->frame_packing_arrangement_repetition_period =
>> +st->type != AV_STEREO3D_FRAMESEQUENCE;
>> +}
>> diff --git a/libavcodec/cbs_h264.h b/libavcodec/cbs_h264.h
>> index 512674ec07..76211c976b 100644
>> --- a/libavcodec/cbs_h264.h
>> +++ b/libavcodec/cbs_h264.h
>> @@ -525,4 +525,12 @@ void 
>> ff_cbs_h264_delete_sei_message(CodedBitstreamContext *ctx,
>>  CodedBitstreamUnit *nal_unit,
>>  int position);
>>  
>> +struct AVStereo3D;
>> +/**
>> + * Fill an SEI Frame Packing Arrangement structure with values derived from
>> + * the AVStereo3D side-data structure.
>> + */
>> +void 
>> ff_cbs_h264_fill_sei_frame_packing_arrangement(H264RawSEIFramePackingArrangement
>>  *fp,
>> +const struct AVStereo3D 
>> *st);
>> +
>>  #endif /* AVCODEC_CBS_H264_H */
>>

Thanks,

- Mark
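
For illustration, a sketch of how the new helper might be driven from frame
side data.  Only the fill call itself comes from the patch above; the wrapper
function and where it would be called from are invented:

#include "libavutil/frame.h"
#include "libavutil/stereo3d.h"
#include "cbs_h264.h"

/* Sketch only: pull AVStereo3D side data off a frame and convert it into the
 * raw SEI payload via the helper added by the patch. */
static void fill_fpa_from_frame(const AVFrame *frame,
                                H264RawSEIFramePackingArrangement *fpa)
{
    const AVFrameSideData *sd =
        av_frame_get_side_data(frame, AV_FRAME_DATA_STEREO3D);

    if (sd)
        ff_cbs_h264_fill_sei_frame_packing_arrangement(fpa,
                (const AVStereo3D *)sd->data);
}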


Re: [FFmpeg-devel] [PATCH v4 06/21] cbs: Add support functions for handling unit content references

2020-02-24 Thread Mark Thompson
On 24/02/2020 01:55, Andreas Rheinhardt wrote:
> Mark Thompson:
>> Use the unit type table to determine what we need to do to clone the
>> internals of the unit content when making copies for refcounting or
>> writeability.  (This will still fail for units with complex content
>> if they do not have a defined clone function.)
>>
>> Setup and naming from a patch by Andreas Rheinhardt
>> , but with the implementation
> 
> You may use my new email here.

Done.

>> changed to use the unit type information if possible rather than
>> requiring a codec-specific function.
>> ---
>>  libavcodec/cbs.c  | 172 ++
>>  libavcodec/cbs.h  |  29 +++
>>  libavcodec/cbs_internal.h |   1 +
>>  3 files changed, 202 insertions(+)
>>
>> diff --git a/libavcodec/cbs.c b/libavcodec/cbs.c
>> index 6cc559e545..91788f6dfb 100644
>> --- a/libavcodec/cbs.c
>> +++ b/libavcodec/cbs.c
>> @@ -881,3 +881,175 @@ int ff_cbs_alloc_unit_content2(CodedBitstreamContext 
>> *ctx,
>>  
>>  return 0;
>>  }
>> +
>> +static int cbs_clone_unit_content(AVBufferRef **clone_ref,
>> +  CodedBitstreamUnit *unit,
>> +  const CodedBitstreamUnitTypeDescriptor 
>> *desc)
>> +{
>> +uint8_t *src, *copy;
>> +uint8_t **src_ptr, **copy_ptr;
>> +AVBufferRef **src_buf, **copy_buf;
>> +int err, i;
>> +
>> +av_assert0(unit->content);
>> +src = unit->content;
>> +
>> +copy = av_malloc(desc->content_size);
>> +if (!copy)
>> +return AVERROR(ENOMEM);
>> +
>> +memcpy(copy, src, desc->content_size);
> 
> One can use av_memdup() for malloc+memcpy.

Yep, changed.

>> +
>> +for (i = 0; i < desc->nb_ref_offsets; i++) {
>> +src_ptr  = (uint8_t**)(src + desc->ref_offsets[i]);
>> +src_buf  = (AVBufferRef**)(src_ptr + 1);
>> +copy_ptr = (uint8_t**)(copy + desc->ref_offsets[i]);
>> +copy_buf = (AVBufferRef**)(copy_ptr + 1);
> 
> In patch #2 you intend to make the AVBufferRef * directly follow the
> pointer so that the above works. This has two drawbacks: It probably
> works on all systems we care about, but it is not spec-compliant as a
> compiler may add padding between these two elements. And furthermore
> it forces you to make these ugly casts above.

You would have to have a very weird ABI to cause problems here, so I'm not too 
worried - we already disallow differently-sized pointer types (see av_freep()), 
so it would have to be some extremely bizarre struct packing rule.

> How about the following approach: The *data pointer and the *data_ref
> pointer will always be put into a dedicated structure (maybe even with
> the size field?) ChildBuf (better name welcome) or whatever and the
> descriptor contains the offset of these ChildBufs. Here is a sketch:
> 
> struct ChildBuf {
> uint8_t *data;
> AVBufferRef *data_ref;
> } ChildBuf;
> 
>   const ChildBuf *src_child = (const ChildBuf *)(src +
> desc->child_offsets[i]);
>   ChildBuf *copy_child = (ChildBuf *)(copy + desc->ref_offsets[i]);
> 
>   if (!src_child->data) {
> av_assert0(!src_child->data_ref);
> continue;
>   }
> ...
>   copy_child->data_ref = av_buffer_ref(src_child->data_ref);
> 
> I admit it would probably involve more writing (unless you can come up
> with a really short name).

Adding the extra layer in the name is annoyingly inelegant for both the 
declaration (extra structure) and the use (extra layer of indirection in the 
name).  Given that, I'm inclined to stay with the current approach without a 
stronger reason to change, because it isolates the ugliness to this one 
function.

I was thinking of adding a unit test which would go through all of the 
descriptor tables and structures to make sure that the offsets in the entries 
actually match.  Would that reassure you that the result is ok?
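
For illustration, the layout convention those offsets rely on, and what such a
test could assert, in a sketch with invented struct and field names:

#include <stddef.h>
#include <stdint.h>
#include "libavutil/buffer.h"

/* Sketch only, invented names: each refcounted field is a data pointer
 * immediately followed by the AVBufferRef that owns it, and the descriptor
 * records offsetof() of the data pointer.  The clone code finds the
 * AVBufferRef** by stepping one pointer past that offset, which is exactly
 * the adjacency a unit test could check for every table entry, e.g.
 *   offsetof(ExampleRawUnit, data_ref) ==
 *       offsetof(ExampleRawUnit, data) + sizeof(uint8_t *). */
typedef struct ExampleRawUnit {
    int          some_field;
    uint8_t     *data;        /* points into data_ref->data */
    AVBufferRef *data_ref;    /* must directly follow 'data' */
} ExampleRawUnit;

static const size_t example_ref_offsets[] = {
    offsetof(ExampleRawUnit, data),
};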

>> +
>> +if (!*src_ptr) {
>> +av_assert0(!*src_buf);
>> +continue;
>> +}
>> +if (!*src_buf) {
>> +// We can't handle a non-refcounted pointer here - we don't
>> +// have enough information to handle whatever structure lies
>> +// at the other end of it.
>> +err = AVERROR(EINVAL);
>> +goto fail;
>> +}
>> +
>> +// src_ptr is required to point somewhere inside src_buf.  If it
>> +// doesn't, there is a bug somewhere.
>> +av_assert0(*src_ptr >= (*src_buf)->data &&
>> +   *src_ptr <  (*src_buf)->data + (*src_buf)->size);
>> +
>> +*copy_buf = av_buffer_ref(*src_buf);
>> +if (!*copy_buf) {
>> +err = AVERROR(ENOMEM);
>> +goto fail;
>> +}
>> +
>> +err = av_buffer_make_writable(copy_buf);
> 
> Making the child buf writable is neither necessary for fixing the
> original problem, because h264_redundant_pps does not modifiy the
> child buffer; nor is there any benefit to it: If unit->content_ref is
> already wr

Re: [FFmpeg-devel] [PATCH 03/12] lavfi: drop vf_qp

2020-02-24 Thread Lou Logan
On Mon, Feb 24, 2020, at 3:37 AM, Anton Khirnov wrote:
> It fundamentally depends on an API that has been deprecated for five
> years, has seen no commits since that time and is of highly dubious
> usefulness.
> ---
>  doc/filters.texi|  32 ---
>  libavfilter/Makefile|   1 -
>  libavfilter/allfilters.c|   1 -
>  libavfilter/vf_qp.c | 183 
>  tests/fate/filter-video.mak |   7 +-
>  tests/ref/fate/filter-pp2   |   1 -
>  tests/ref/fate/filter-pp3   |   1 -
>  7 files changed, 1 insertion(+), 225 deletions(-)
>  delete mode 100644 libavfilter/vf_qp.c
>  delete mode 100644 tests/ref/fate/filter-pp2
>  delete mode 100644 tests/ref/fate/filter-pp3

Fine with me. I've never seen it used by anyone.

Re: [FFmpeg-devel] [PATCH v4 18/21] cbs_h265: Add functions to turn HDR metadata into SEI

2020-02-24 Thread Vittorio Giovara
On Sun, Feb 23, 2020 at 6:41 PM Mark Thompson  wrote:

> ---
>  libavcodec/Makefile   |  2 +-
>  libavcodec/cbs_h265.c | 99 +++
>  libavcodec/cbs_h265.h | 18 
>  3 files changed, 118 insertions(+), 1 deletion(-)
>  create mode 100644 libavcodec/cbs_h265.c
>
> diff --git a/libavcodec/Makefile b/libavcodec/Makefile
> index 0c4547f3a1..1ce079687b 100644
> --- a/libavcodec/Makefile
> +++ b/libavcodec/Makefile
> @@ -65,7 +65,7 @@ OBJS-$(CONFIG_CABAC)   += cabac.o
>  OBJS-$(CONFIG_CBS) += cbs.o
>  OBJS-$(CONFIG_CBS_AV1) += cbs_av1.o
>  OBJS-$(CONFIG_CBS_H264)+= cbs_h2645.o cbs_h264.o
> h2645_parse.o
> -OBJS-$(CONFIG_CBS_H265)+= cbs_h2645.o h2645_parse.o
> +OBJS-$(CONFIG_CBS_H265)+= cbs_h2645.o cbs_h265.o
> h2645_parse.o
>  OBJS-$(CONFIG_CBS_JPEG)+= cbs_jpeg.o
>  OBJS-$(CONFIG_CBS_MPEG2)   += cbs_mpeg2.o
>  OBJS-$(CONFIG_CBS_VP9) += cbs_vp9.o
> diff --git a/libavcodec/cbs_h265.c b/libavcodec/cbs_h265.c
> new file mode 100644
> index 00..590977cf00
> --- /dev/null
> +++ b/libavcodec/cbs_h265.c
> @@ -0,0 +1,99 @@
> +/*
> + * This file is part of FFmpeg.
> + *
> + * FFmpeg is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU Lesser General Public
> + * License as published by the Free Software Foundation; either
> + * version 2.1 of the License, or (at your option) any later version.
> + *
> + * FFmpeg is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> + * Lesser General Public License for more details.
> + *
> + * You should have received a copy of the GNU Lesser General Public
> + * License along with FFmpeg; if not, write to the Free Software
> + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
> 02110-1301 USA
> + */
> +
> +#include "libavutil/mathematics.h"
> +#include "libavutil/mastering_display_metadata.h"
> +
> +#include "cbs_h265.h"
> +
> +
> +static uint32_t rescale_clip(AVRational value, uint32_t scale, uint32_t
> max)
> +{
> +int64_t scaled = av_rescale(scale, value.num, value.den);
> +return av_clip64(scaled, 0, max);
> +}
> +
> +static uint32_t rescale_fraction(AVRational value, uint32_t max)
> +{
> +return rescale_clip(value, max, max);
> +}
> +
> +void
> ff_cbs_h265_fill_sei_mastering_display(H265RawSEIMasteringDisplayColourVolume
> *mdcv,
> +const
> AVMasteringDisplayMetadata *mdm)
> +{
> +memset(mdcv, 0, sizeof(*mdcv));
> +
> +if (mdm->has_primaries) {
> +// The values in the metadata structure are fractions between 0
> and 1,
> +// while the SEI message contains fixed-point values with an
> increment
> +// of 0.00002.  So, scale up by 50000 to convert between them.
> +
> +for (int a = 0; a < 3; a++) {
> +// The metadata structure stores this in RGB order, but the
> SEI
> +// wants it in GBR order.
> +int b = (a + 1) % 3;
>

this is a pretty minor comment, but do you think you could use the more
legible way present in other parts of the codebase?
const int mapping[3] = {2, 0, 1};
rather than (a + 1) % 3;

Vittorio

+mdcv->display_primaries_x[a] =
> +rescale_fraction(mdm->display_primaries[b][0], 50000);
> +mdcv->display_primaries_y[a] =
> +rescale_fraction(mdm->display_primaries[b][1], 50000);
> +}
> +
> +mdcv->white_point_x = rescale_fraction(mdm->white_point[0],
> 50000);
> +mdcv->white_point_y = rescale_fraction(mdm->white_point[1],
> 50000);
> +}
> +
> +if (mdm->has_luminance) {
> +// Metadata are rational values in candelas per square metre, SEI
> +// contains fixed point in units of 0.0001 candelas per square
> +// metre.  So scale up by 10000 to convert between them, and clip
> to
> +// ensure that we don't overflow.
> +
> +mdcv->max_display_mastering_luminance =
> +rescale_clip(mdm->max_luminance, 10000, UINT32_MAX);
> +mdcv->min_display_mastering_luminance =
> +rescale_clip(mdm->min_luminance, 10000, UINT32_MAX);
> +
> +// The spec has a hard requirement that min is less than the max,
> +// and the SEI-writing code enforces that.
> +if (!(mdcv->min_display_mastering_luminance <
> +  mdcv->max_display_mastering_luminance)) {
> +if (mdcv->max_display_mastering_luminance == UINT32_MAX)
> +mdcv->min_display_mastering_luminance =
> +mdcv->max_display_mastering_luminance - 1;
> +else
> +mdcv->max_display_mastering_luminance =
> +mdcv->min_display_mastering_luminance + 1;
> +   

Re: [FFmpeg-devel] Status and Plans for Subtitle Filters

2020-02-24 Thread Matt Zagrabelny
On Sat, Feb 22, 2020 at 2:47 AM Clément Bœsch  wrote:
>
> On Fri, Feb 14, 2020 at 03:26:30AM +, Soft Works wrote:
> > Hi,
> >
>
> Hi,
>
> > I am looking for some guidance regarding future plans about processing 
> > subtitle streams in filter graphs.
> >
> > Please correct me where I'm wrong - this is the situation as I've 
> > understood it so far:
> [...]
>
> Your analysis was pretty much on point. I've been away from FFmpeg development
> from around the time of that patchset. While I can't recommend a course of
> action, I can elaborate on what was blocking and missing. Beware that this is
> reconstructed from my unreliable memory and I may forget important points.
>
> Last state can be found at 
> https://github.com/ubitux/FFmpeg/tree/subtitles-new-api
>
> The last WIP commit includes a TODO.txt which I'm sharing here for the
> record:
>
> > TODO:
> > - heartbeat mechanism
> > - drop sub2video (needs heartbeat)
> > - properly deal with -ss and -t (need strim filter?)
> > - sub_start_display/sub_end_display needs to be honored
> > - find a test case for dvbsub as it's likely broken (ffmpeg.c hack is
> >   removed and should be replaced by a EAGAIN logic in lavc/utils.c)
> > - make it pass FATE:
> >   * fix cc/subcc
> >   * broke various other stuff
> > - Changelog/APIchanges
> > - proper API doxy
> > - update lavfi/subtitles?
> > - merge [avs]null filters
> > - filters doc
> > - avcodec_default_get_buffer2?
> > - how to transfer subtitle header down to libavfilter?
>
> The biggest TODO entry right now is the heartbeat mechanism which is required
> for being able to drop the sub2video hack. You've seen that discussed in the
> thread.
>
> Thing is, that branch is already a relatively invasive and may include
> controversial API change. Typically, the way I decided to handle subtitle
> text/rectangle allocation within AVSubtitle is "different" but I couldn't come
> up with a better solution. Basically, we have to fit them in AVFrame for a
> clean integration within FFmpeg ecosystem, but subtitles are not simple 
> buffers
> like audio and video can be: they have to be backed by more complex dynamic
> structures.
>
> Also unfortunately, addressing the problem through an iterative process is
> extremely difficult in the current situation due to historical technical debt.
> You may have noticed that the decode and encode subtitles API are a few
> generations behind the audio and video ones. The reason it wasn't modernized
> earlier was because it was already a pita in the past.
>
> The subtitles refactor requires to see the big picture and all the problems at
> once. Since the core change (subtitles in AVFrame) requires the introduction 
> of
> a new subtitles structure and API, it also involve addressing the shortcomings
> of the original API (or maybe we could tolerate a new API that actually looks
> like the old?). So even if we ignore the subtitle-in-avframe thing, we don't
> have a clear answer for a sane API that handles everything. Here is a
> non-exhaustive list of stuff that we have to take into account while thinking
> about that:
>
> - text subtitles with and without markup
> - sparsity, overlapping
> - different semantics for duration (duration available, no known duration,
>   event-based clearing, ...)
> - closed captions / teletext
> - bitmap subtitles and their potential colorspaces (each rectangle as an
>   AVFrame is way overkill but technically that's exactly what it is)
>
> This should give you a hint on why the task has been quite overwhelming.
> Subtitles were the reason I initially came into the multimedia world, and they
> might have played a role in why I distanced myself from it.
>
> That said, I'd say the main reason it was put in stand by was because I was
> kind of alone in that struggle. While I got a lot of support from people, I
> think the main help I needed would have been formalizing the API we wanted.
> Like, code and API gymnastic is not that much of a problem, but deciding on
> what to do, and what path we take to reach that point is/was the core issue.
>
> And to be honest, I never really made up my mind on abandoning the work. So 
> I'm
> calling it again: if someone is interested in addressing the problem once and
> for all, I can spend some time rebasing the current state and clarifying what 
> has
> been said in this mail in the details so we can work together on an API
> contract we want between FFmpeg and our users. When we have this, I think
> progress can be made again.


Nicolas and Clément, et al.,

Is financial support at all blocking progress in subtitle filters?

I'm afraid I don't have much ffmpeg coding expertise to contribute,
but I am interested in seeing better subtitle support in ffmpeg and am
looking to help where I can.

Let us know if there is anything else that non-coders could help with.

Thanks,

-m
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

Re: [FFmpeg-devel] Status and Plans for Subtitle Filters

2020-02-24 Thread Nicolas George
Michael Niedermayer (12020-02-24):
> > No, they can't: being the same subtitle or not is part of the semantic.

> Does anyone else share this oppinion ?
> 
> iam asking because we need to resolve such differences of oppinion to
> move forward.
> Theres no way to design an API if such relativly fundamental things
> have disagreements on them

It's not a matter of opinion, it is actually quite obvious:

# 1
# 00:00:10,000 --> 00:00:11,000
# Hello.
# 
# 2
# 00:00:11,000 --> 00:00:12,000
# Hello.

… means that two people said Hello in quick succession while:

# 1
# 00:00:10,000 --> 00:00:12,000
# Hello.

… means that Hello was said only once, slowly.

And it has practical consequences: Clément suggested a voice synthesis
filter, that would change its output.

Some subtitles have overlap all over the place. I am thinking in
particular of some animé fansub, with on-screen signs and onomatopoeia
translated and cultural notes, all along with dialogue. De-overlapping
would increase their size considerably, and cause actual dialogue to be
split, which results in the problems I have explained above.

But I don't know why you are so focussed on this. Overlapping is not a
problem, it's just something to keep in mind while designing the API,
like the fact that bitmap subtitles have several rectangles. It's
actually quite easy to handle.

Regards,

-- 
  Nicolas George


signature.asc
Description: PGP signature
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] Status and Plans for Subtitle Filters

2020-02-24 Thread Michael Niedermayer
On Mon, Feb 24, 2020 at 12:17:37AM +0100, Nicolas George wrote:
> Marton Balint (12020-02-23):
> > Two overlapping subtitles can be broken into 3 non-overlapping subtitles,
> 
> No, they can't: being the same subtitle or not is part of the semantic.

Does anyone else share this opinion?

I am asking because we need to resolve such differences of opinion to
move forward.
There's no way to design an API while there is disagreement on such
relatively fundamental things.

Thanks

[...]

-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

Any man who breaks a law that conscience tells him is unjust and willingly 
accepts the penalty by staying in jail in order to arouse the conscience of 
the community on the injustice of the law is at that moment expressing the 
very highest respect for law. - Martin Luther King Jr


signature.asc
Description: PGP signature
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH 03/12] lavfi: drop vf_qp

2020-02-24 Thread Michael Niedermayer
On Mon, Feb 24, 2020 at 03:54:45PM +0100, Anton Khirnov wrote:
> Quoting Carl Eugen Hoyos (2020-02-24 13:50:57)
> > Am Mo., 24. Feb. 2020 um 13:40 Uhr schrieb Anton Khirnov 
> > :
> > >
> > > It fundamentally depends on an API that has been deprecated for five
> > > years, has seen no commits since that time and is of highly dubious
> > > usefulness.
> > 
> > Please explain how the removed functionality was replaced.
> 
> It was not, for the reasons mentioned in the commit message. 

> In my view,
> the fact that nobody fixed it in all that time proves that nobody cares
> about this functionality and thus that there is no value in keeping it.

Your reasoning only works if there is a problem that requires a fix.

Your reasoning here seems to be:
BIG problem in A && no one fixes it -> no one cares about A

My view is:
whoever sees a problem in A (I do not, really) should fix it.

Maybe I am missing something and there is in fact a big problem in the
code. But if that is the case I am not aware of the problem, and that's
why I did nothing for years "fixing" it. It's not that I do not care.

So what is really the issue here?
If I build vf_qp.o I get:
./libavutil/frame.h:719:1: note: 'av_frame_get_qp_table' has been explicitly 
marked deprecated here
attribute_deprecated

./libavutil/frame.h:721:1: note: 'av_frame_set_qp_table' has been explicitly 
marked deprecated here
attribute_deprecated

If I look at git history, these were deprecated in
commit 7df37dd319f2d9d3e1becd5d433884e3ccfa1ee2
Author: James Almer 
Date:   Mon Oct 23 11:10:48 2017 -0300

avutil/frame: deprecate getters and setters for AVFrame fields

The fields can be accessed directly, so these are not needed anymore.

This says the field can be accessed directly, so certainly it's not
deprecated in favor of the side data API.

And in fact av_frame_get_qp_table / av_frame_set_qp_table already use the
side data API, so none of this really makes sense.
And the whole argument about five years also isn't correct, as
October 2017 is not 5 years ago.


> 
> Furthermore, I believe this filter (and all the associated
> "postprocessing" ones) are anachronistic relics of the DivX era. They
> were in fashion around ~2005 (though I doubt they were actually
> improving anything even then) but nobody with a clue has used them since
> H.264 took over.

Well, for old videos (which still exist today), and I mean the stuff
that used 8x8 DCT based codecs (mpeg1 to mpeg4, divx, msmpeg4, realvideo,
also jpeg) at not very high bitrates. (Very high bitrates of course
don't leave much to improve toward.)

There is a quite noticeable quality improvement when using postprocessing
with the right parameters, both subjective and objective (PSNR IIRC).
And on the rare but not nonexistent occasions where I do want to watch
such a video, I always use one of these filters.
In reality that has often been the spp filter, but that's probably not
important.
In general, if you can see 8x8 blocks without the filter, these filters
will simply make the video look better.

Whether passing QP helps for the above use case probably depends on how
variable QP is in the video one wants to watch, or whether a single fixed
hand-tuned QP works well (it often does indeed).

Another use case for passing QP was lossless re-encoding.
I do not know how commonly this has been used (I am not using it and it's not
my idea originally); this of course also requires an encoder which
can accept motion vectors and MB types on input, or intra only.

Yet another use case is maintaining the input encoder's choices
for quantization / quality when converting to another format.
In principle one could even have one encoder provide quantization
information to a second encoder:

           -> encoder1
          /       v
raw input        QP
          \       v
           -> encoder2
   
Why? I don't know; maybe for art or fun, to duplicate some bad QP choices or good
QP choices, or to edit QP choices in a specific area.

But I would not call the ability to pass the QP array around and
to modify it useless.
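
As a purely illustrative sketch of what passing the table around via frame
side data could look like (assuming the deprecated
AV_FRAME_DATA_QP_TABLE_PROPERTIES / AV_FRAME_DATA_QP_TABLE_DATA entries are
still compiled in; this is not code from any patch):

#include <string.h>
#include <libavutil/error.h>
#include <libavutil/frame.h>

/* Copy the decoder's QP table side data from one frame to another so that a
 * downstream consumer (a postprocessing filter, or a second encoder) can
 * reuse or modify it.  Sketch only; error handling kept minimal. */
static int copy_qp_side_data(AVFrame *dst, const AVFrame *src)
{
    static const enum AVFrameSideDataType types[] = {
        AV_FRAME_DATA_QP_TABLE_PROPERTIES,
        AV_FRAME_DATA_QP_TABLE_DATA,
    };

    for (int i = 0; i < 2; i++) {
        const AVFrameSideData *sd = av_frame_get_side_data(src, types[i]);
        AVFrameSideData *out;

        if (!sd)
            continue;
        out = av_frame_new_side_data(dst, types[i], sd->size);
        if (!out)
            return AVERROR(ENOMEM);
        memcpy(out->data, sd->data, sd->size);
    }
    return 0;
}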

Also, last but not least: if you think there really is an issue that
MUST be fixed or otherwise the code must be removed, why not ask the
people listed in authors & copyright to look into it?
I am listed in the copyright it seems, and unless I forgot it, no one
asked me to fix some major issue in vf_qp.

Thanks


[...]
-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

Those who are best at talking, realize last or never when they are wrong.


signature.asc
Description: PGP signature
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH 03/12] lavfi: drop vf_qp

2020-02-24 Thread Paul B Mahol
On 2/24/20, Anton Khirnov  wrote:
> Quoting Paul B Mahol (2020-02-24 18:07:26)
>> On 2/24/20, Anton Khirnov  wrote:
>> > Quoting Paul B Mahol (2020-02-24 17:02:52)
>> >> On 2/24/20, James Almer  wrote:
>> >> > On Monday, February 24, 2020, Carl Eugen Hoyos 
>> >> > wrote:
>> >> >>
>> >> >>
>> >> >>
>> >> >>> Am 24.02.2020 um 15:54 schrieb Anton Khirnov :
>> >> >>>
>> >> >>> Quoting Carl Eugen Hoyos (2020-02-24 13:50:57)
>> >> > Am Mo., 24. Feb. 2020 um 13:40 Uhr schrieb Anton Khirnov <
>> >> > an...@khirnov.net>:
>> >> >
>> >> > It fundamentally depends on an API that has been deprecated for
>> >> > five
>> >> > years, has seen no commits since that time and is of highly
>> >> > dubious
>> >> > usefulness.
>> >> 
>> >>  Please explain how the removed functionality was replaced.
>> >> >>>
>> >> >>> It was not, for the reasons mentioned in the commit message. In my
>> >> >>> view,
>> >> >>> the fact that nobody fixed it in all that time proves that nobody
>> >> >>> cares
>> >> >>> about this functionality and thus that there is no value in keeping
>> >> >>> it.
>> >> >>
>> >> >> In this case your patch set is not acceptable: I strongly suggest
>> >> >> you
>> >> > work on something that improves FFmpeg instead of removing features.
>> >> >>
>> >> >> Carl Eugen
>> >> >
>> >> > Anton argued why it should be removed. You should do the same about
>> >> > why
>> >> > it
>> >> > should not. Simply saying you are against removing features other
>> >> > developers consider useless is not enough.
>> >>
>> >> Filter as is was simply never marked for deprecation, same applies for
>> >> removed features to other filters in this set.
>> >
>> > So what? It produced deprecation warnings on every build for five years.
>> >
>> > Are you claiming you have a use case for it? Or know about someone who
>> > does?
>>
>> I believe there are still usecases for this filter and other filters.
>
> Elaborate please. What use cases? Actual or theoretical?
>
>>
>> What about other filters and other deprecation warnings?
>> Are filters gonna be removed because of single deprecation warning in
>> file?
>
> This is sophistry, the filter is not being dropped because of a minor
> deprecation warning in it. The fundamental functionality which it is
> built around is to be removed.
>
>>
>> I think it was mistake to set qp side data as deprecated right after
>> its addition.
>
> This is not an an accurate description of what happened. Exporting QP
> tables wasn't deprecated at that point. Rather the preexisting
> functionality for exporting QP tables (as plain points to avcodec
> internal buffers) was converted to newly added side data API to keep
> things working for a while and see if anyone wants to keep this. Five
> years passed and nobody did. Therefore it should be removed.
>
>>
>> It is hurting our reputation when users look how we removed items
>> after few years
>> of usage or when we deprecate items right in same commit that added them.
>
> I believe it hurts our reputation a lot more when our feature list reads
> like state of the art from 2002, but necessary infrastructure
> maintenance cannot be done because of the burden of all these
> "features".
>
> Users hate us a lot more for confusing inconsistent poorly documented
> APIs which are hard to use correctly than for deprecating obsolete
> filters.

There is a lot of hate here, so I'll refrain from posting further.
Do as you and the technical committee wish. I'm out of the game.

>
> --
> Anton Khirnov
> ___
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
>
> To unsubscribe, visit link above, or email
> ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH] Update HDR10+ metadata structure.

2020-02-24 Thread Vittorio Giovara
On Sat, Feb 22, 2020 at 12:44 PM Mohammad Izadi  wrote:

> On Fri, Feb 21, 2020, 6:44 PM Vittorio Giovara  >
> wrote:
>
> > On Fri, Feb 21, 2020 at 5:17 PM Mohammad Izadi 
> > wrote:
> >
> > > Why does the struct belong to lavu? This struct is super similar to
> > structs
> > > in libavcodec/hevc_sei.h. We just move it to a new file to share it
> > between
> > > hevc and vp9 encoder/decoder.
> > >
> > > --
> > >
> >
> > 1. Please kindly stop top posting:
> http://www.idallen.com/topposting.html
> > 2. It belongs to lavu because it's where the frame code generically code
> > is. I'm not familiar with this API too much, but from what i gather users
> > may need to have a way of accessing this data without pulling in all the
> > dependencies of lavc or lavf.
> >
> This struct is related to parsing and SEI, not frame. If so, why other
> structs are not in lavu? Please check similar structs in hevc_sei?
>

I don't think I understand your question, but if you need examples you can
check these patches
8f58ecc344a92e63193c38e28c173be987954bbb structure defined in lavu,
e7a6f8c972a0b5b98ef7bbf393e95c434e9e2539 structure populated in lavc
d91718107c33960ad295950d7419e6dba292d723 structure defined in lavu, used in
lavc
7e244c68600f479270e979258e389ed5240885fb same
and so on and so on, so I'd advise you to do the same, scrapping your
current code if necessary.
-- 
Vittorio
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH 03/12] lavfi: drop vf_qp

2020-02-24 Thread Anton Khirnov
Quoting Paul B Mahol (2020-02-24 18:07:26)
> On 2/24/20, Anton Khirnov  wrote:
> > Quoting Paul B Mahol (2020-02-24 17:02:52)
> >> On 2/24/20, James Almer  wrote:
> >> > On Monday, February 24, 2020, Carl Eugen Hoyos 
> >> > wrote:
> >> >>
> >> >>
> >> >>
> >> >>> Am 24.02.2020 um 15:54 schrieb Anton Khirnov :
> >> >>>
> >> >>> Quoting Carl Eugen Hoyos (2020-02-24 13:50:57)
> >> > Am Mo., 24. Feb. 2020 um 13:40 Uhr schrieb Anton Khirnov <
> >> > an...@khirnov.net>:
> >> >
> >> > It fundamentally depends on an API that has been deprecated for five
> >> > years, has seen no commits since that time and is of highly dubious
> >> > usefulness.
> >> 
> >>  Please explain how the removed functionality was replaced.
> >> >>>
> >> >>> It was not, for the reasons mentioned in the commit message. In my
> >> >>> view,
> >> >>> the fact that nobody fixed it in all that time proves that nobody
> >> >>> cares
> >> >>> about this functionality and thus that there is no value in keeping
> >> >>> it.
> >> >>
> >> >> In this case your patch set is not acceptable: I strongly suggest you
> >> > work on something that improves FFmpeg instead of removing features.
> >> >>
> >> >> Carl Eugen
> >> >
> >> > Anton argued why it should be removed. You should do the same about why
> >> > it
> >> > should not. Simply saying you are against removing features other
> >> > developers consider useless is not enough.
> >>
> >> Filter as is was simply never marked for deprecation, same applies for
> >> removed features to other filters in this set.
> >
> > So what? It produced deprecation warnings on every build for five years.
> >
> > Are you claiming you have a use case for it? Or know about someone who
> > does?
> 
> I believe there are still usecases for this filter and other filters.

Elaborate please. What use cases? Actual or theoretical?

> 
> What about other filters and other deprecation warnings?
> Are filters gonna be removed because of single deprecation warning in file?

This is sophistry; the filter is not being dropped because of a minor
deprecation warning in it. The fundamental functionality which it is
built around is to be removed.

> 
> I think it was mistake to set qp side data as deprecated right after
> its addition.

This is not an accurate description of what happened. Exporting QP
tables wasn't deprecated at that point. Rather, the preexisting
functionality for exporting QP tables (as plain pointers to avcodec
internal buffers) was converted to the newly added side data API to keep
things working for a while and see if anyone wants to keep this. Five
years passed and nobody did. Therefore it should be removed.

> 
> It is hurting our reputation when users look how we removed items
> after few years
> of usage or when we deprecate items right in same commit that added them.

I believe it hurts our reputation a lot more when our feature list reads
like state of the art from 2002, but necessary infrastructure
maintenance cannot be done because of the burden of all these
"features".

Users hate us a lot more for confusing inconsistent poorly documented
APIs which are hard to use correctly than for deprecating obsolete
filters.

-- 
Anton Khirnov
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] GSoC 2020

2020-02-24 Thread Thilo Borgmann
Hi,

>>> please help fill the 2020 GSoC Ideas page
>>> https://trac.ffmpeg.org/wiki/SponsoringPrograms/GSoC/2020
>>>
>>> (This page is key to being acccepted to GSoC)
>>
>> I guess everybody already noticed that FFmpeg had been accepted as a
>> mentoring Org in GSoC 2020! :D
> 
> Hi,
> 
> Just to save the chance for FFmpeg, I proposed one idea under
> 'Intel® Video and Audio for Linux' to tune performance of native layer conv2d,
> see https://01.org/linuxmedia/gsoc/gsoc-2020-ideas for detail.
> 
> I'll continue if no one objects. 

that's a good thing and thanks for letting us know!

-Thilo
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH v4 03/21] cbs: Describe allocate/free methods in tabular form

2020-02-24 Thread Andreas Rheinhardt
Mark Thompson:
> Unit types are split into three categories, depending on how their
> content is managed:
> * POD structure - these require no special treatment.
> * Structure containing references to refcounted buffers - these can use
>   a common free function when the offsets of all the internal references
>   are known.
> * More complex structures - these still require ad-hoc treatment.
> 
> For each codec we can then maintain a table of descriptors for each set of
> equivalent unit types, defining the mechanism needed to allocate/free that
> unit content.  This is not required to be used immediately - a new alloc
> function supports this, but does not replace the old one which works without
> referring to these tables.
> ---
>  libavcodec/cbs.c  | 69 +++
>  libavcodec/cbs.h  |  9 +
>  libavcodec/cbs_internal.h | 60 ++
>  3 files changed, 138 insertions(+)
> 
> diff --git a/libavcodec/cbs.c b/libavcodec/cbs.c
> index 0bd5e1ac5d..6cc559e545 100644
> --- a/libavcodec/cbs.c
> +++ b/libavcodec/cbs.c
> @@ -812,3 +812,72 @@ void ff_cbs_delete_unit(CodedBitstreamContext *ctx,
>  frag->units + position + 1,
>  (frag->nb_units - position) * sizeof(*frag->units));
>  }
> +
> +static void cbs_default_free_unit_content(void *opaque, uint8_t *data)
> +{
> +const CodedBitstreamUnitTypeDescriptor *desc = opaque;
> +if (desc->content_type == CBS_CONTENT_TYPE_INTERNAL_REFS) {
> +int i;
> +for (i = 0; i < desc->nb_ref_offsets; i++) {
> +void **ptr = (void**)(data + desc->ref_offsets[i]);
> +av_buffer_unref((AVBufferRef**)(ptr + 1));
> +}
> +}
> +av_free(data);
> +}
> +
> +static const CodedBitstreamUnitTypeDescriptor
> +*cbs_find_unit_type_desc(CodedBitstreamContext *ctx,
> + CodedBitstreamUnit *unit)
> +{
> +const CodedBitstreamUnitTypeDescriptor *desc;
> +int i, j;
> +
> +if (!ctx->codec->unit_types)
> +return NULL;
> +
> +for (i = 0;; i++) {
> +desc = &ctx->codec->unit_types[i];
> +if (desc->nb_unit_types == 0)
> +break;
> +if (desc->nb_unit_types == CBS_UNIT_TYPE_RANGE) {
> +if (unit->type >= desc->unit_type_range_start &&
> +unit->type <= desc->unit_type_range_end)
> +return desc;
> +} else {
> +for (j = 0; j < desc->nb_unit_types; j++) {
> +if (desc->unit_types[j] == unit->type)
> +return desc;
> +}
> +}
> +}
> +return NULL;
> +}
> +
> +int ff_cbs_alloc_unit_content2(CodedBitstreamContext *ctx,
> +   CodedBitstreamUnit *unit)
> +{
> +const CodedBitstreamUnitTypeDescriptor *desc;
> +
> +av_assert0(!unit->content && !unit->content_ref);
> +
> +desc = cbs_find_unit_type_desc(ctx, unit);
> +if (!desc)
> +return AVERROR(ENOSYS);
> +
> +unit->content = av_mallocz(desc->content_size);
> +if (!unit->content)
> +return AVERROR(ENOMEM);
> +
> +unit->content_ref =
> +av_buffer_create(unit->content, desc->content_size,
> + desc->content_free ? desc->content_free
> +: cbs_default_free_unit_content,
> + (void*)desc, 0);
> +if (!unit->content_ref) {
> +av_freep(&unit->content);
> +return AVERROR(ENOMEM);
> +}
> +
> +return 0;
> +}
> diff --git a/libavcodec/cbs.h b/libavcodec/cbs.h
> index cb3081e2c6..2a5959a2b0 100644
> --- a/libavcodec/cbs.h
> +++ b/libavcodec/cbs.h
> @@ -352,6 +352,15 @@ int ff_cbs_alloc_unit_content(CodedBitstreamContext *ctx,
>size_t size,
>void (*free)(void *opaque, uint8_t *content));
>  
> +/**
> + * Allocate a new internal content buffer matching the type of the unit.
> + *
> + * The content will be zeroed.
> + */
> +int ff_cbs_alloc_unit_content2(CodedBitstreamContext *ctx,
> +   CodedBitstreamUnit *unit);
> +
> +
>  /**
>   * Allocate a new internal data buffer of the given size in the unit.
>   *
> diff --git a/libavcodec/cbs_internal.h b/libavcodec/cbs_internal.h
> index 4c5a535ca6..615f514a85 100644
> --- a/libavcodec/cbs_internal.h
> +++ b/libavcodec/cbs_internal.h
> @@ -25,11 +25,71 @@
>  #include "put_bits.h"
>  
>  
> +enum {
> +// Unit content is a simple structure.
> +CBS_CONTENT_TYPE_POD,
> +// Unit content contains some references to other structures, but all
> +// managed via buffer reference counting.  The descriptor defines the
> +// structure offsets of every buffer reference.
> +CBS_CONTENT_TYPE_INTERNAL_REFS,
> +// Unit content is something more complex.  The descriptor defines
> +// special functions to manage the content.
> +CBS_CONTENT_TYPE_COMPLEX,
> +};
> +
> +en

Re: [FFmpeg-devel] [PATCH] Removed bogus/duplicate PNG parser subsystem entry.

2020-02-24 Thread Paul B Mahol
applied

On 2/24/20, Anamitra Ghorui  wrote:
> ---
>  libavcodec/Makefile | 1 -
>  1 file changed, 1 deletion(-)
>
> diff --git a/libavcodec/Makefile b/libavcodec/Makefile
> index 0de585279c..f1c032b456 100644
> --- a/libavcodec/Makefile
> +++ b/libavcodec/Makefile
> @@ -1059,7 +1059,6 @@ OBJS-$(CONFIG_MLP_PARSER)  += mlp_parse.o
> mlp_parser.o mlp.o
>  OBJS-$(CONFIG_MPEG4VIDEO_PARSER)   += mpeg4video_parser.o h263.o \
>mpeg4videodec.o mpeg4video.o \
>ituh263dec.o h263dec.o h263data.o
> -OBJS-$(CONFIG_PNG_PARSER)  += png_parser.o
>  OBJS-$(CONFIG_MPEGAUDIO_PARSER)+= mpegaudio_parser.o
>  OBJS-$(CONFIG_MPEGVIDEO_PARSER)+= mpegvideo_parser.o\
>mpeg12.o mpeg12data.o
> --
> 2.17.1
>
>
> ___
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
>
> To unsubscribe, visit link above, or email
> ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH 03/12] lavfi: drop vf_qp

2020-02-24 Thread Paul B Mahol
On 2/24/20, Anton Khirnov  wrote:
> Quoting Paul B Mahol (2020-02-24 17:02:52)
>> On 2/24/20, James Almer  wrote:
>> > On Monday, February 24, 2020, Carl Eugen Hoyos 
>> > wrote:
>> >>
>> >>
>> >>
>> >>> Am 24.02.2020 um 15:54 schrieb Anton Khirnov :
>> >>>
>> >>> Quoting Carl Eugen Hoyos (2020-02-24 13:50:57)
>> > Am Mo., 24. Feb. 2020 um 13:40 Uhr schrieb Anton Khirnov <
>> > an...@khirnov.net>:
>> >
>> > It fundamentally depends on an API that has been deprecated for five
>> > years, has seen no commits since that time and is of highly dubious
>> > usefulness.
>> 
>>  Please explain how the removed functionality was replaced.
>> >>>
>> >>> It was not, for the reasons mentioned in the commit message. In my
>> >>> view,
>> >>> the fact that nobody fixed it in all that time proves that nobody
>> >>> cares
>> >>> about this functionality and thus that there is no value in keeping
>> >>> it.
>> >>
>> >> In this case your patch set is not acceptable: I strongly suggest you
>> > work on something that improves FFmpeg instead of removing features.
>> >>
>> >> Carl Eugen
>> >
>> > Anton argued why it should be removed. You should do the same about why
>> > it
>> > should not. Simply saying you are against removing features other
>> > developers consider useless is not enough.
>>
>> Filter as is was simply never marked for deprecation, same applies for
>> removed features to other filters in this set.
>
> So what? It produced deprecation warnings on every build for five years.
>
> Are you claiming you have a use case for it? Or know about someone who
> does?

I believe there are still use cases for this filter and other filters.

What about other filters and other deprecation warnings?
Are filters going to be removed because of a single deprecation warning in a file?

I think it was a mistake to mark the qp side data as deprecated right after
its addition.

It hurts our reputation when users see how we removed items after a few years
of usage, or when we deprecate items in the same commit that added them.

>
> --
> Anton Khirnov
> ___
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
>
> To unsubscribe, visit link above, or email
> ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH 03/12] lavfi: drop vf_qp

2020-02-24 Thread Soft Works


> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> Paul B Mahol
> Sent: Monday, February 24, 2020 5:39 PM
> To: FFmpeg development discussions and patches  de...@ffmpeg.org>
> Subject: Re: [FFmpeg-devel] [PATCH 03/12] lavfi: drop vf_qp
> 
> On 2/24/20, Soft Works  wrote:
> >> -Original Message-
> >> From: ffmpeg-devel  On Behalf Of
> >> Anton Khirnov
> >> Sent: Monday, February 24, 2020 3:55 PM
> >> To: FFmpeg development discussions and patches  >> de...@ffmpeg.org>
> >> Subject: Re: [FFmpeg-devel] [PATCH 03/12] lavfi: drop vf_qp
> >>
> >> Quoting Carl Eugen Hoyos (2020-02-24 13:50:57)
> >> > Am Mo., 24. Feb. 2020 um 13:40 Uhr schrieb Anton Khirnov
> >> :
> >> > >
> >> > > It fundamentally depends on an API that has been deprecated for
> >> > > five years, has seen no commits since that time and is of highly
> >> > > dubious usefulness.
> >> >
> >> > Please explain how the removed functionality was replaced.
> >>
> >> It was not, for the reasons mentioned in the commit message. In my
> >> view, the fact that nobody fixed it in all that time proves that
> >> nobody cares about this functionality and thus that there is no value
> >> in keeping it.
> >>
> >> Furthermore, I believe this filter (and all the associated
> >> "postprocessing"
> >> ones) are anachronistic relics of the DivX era. They were in fashion
> >> around
> >> ~2005 (though I doubt they were actually improving anything even
> >> then) but nobody with a clue has used them since
> >
> > Following those or similar arguments in a consequent way, would
> > quickly constitute quite a list of ffmpeg features having "no value"
> anymore.
> >
> 
> Please write such a list, I'm interested.

Excellent trap, but I won't step in ;-)

I'm not even advocating to start some kind of deprecation cycle. All I
meant to say is that _if_ there's something to be removed, it might
possibly be a good idea to have some kind of "soft-" or "2-stage-"removal
which would provide an opportunity for users to accommodate 
or complain.

softworkz


___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH] avcodec/mpeg12dec: Do not alter avctx->rc_buffer_size

2020-02-24 Thread Gaullier Nicolas
> De : ffmpeg-devel  De la part de Hendrik 
> Leppkes
> Envoyé : lundi 24 février 2020 17:26
> À : FFmpeg development discussions and patches 
> Objet : Re: [FFmpeg-devel] [PATCH] avcodec/mpeg12dec: Do not alter 
> avctx->rc_buffer_size
> 
> On Mon, Feb 24, 2020 at 5:13 PM Nicolas Gaullier
>  wrote:
> >
> 
> rc_buffer_size doesn't really have a meaning to a decoder (its for
> encoders and muxers), so why should it not be able to change it to
> match the value it reads from the bitstream?
> 
> - Hendrik

Both because it is not helpful: a later patch will make the value available as
side data,
and because the API doc (avcodec.h) says "decoding: unused".
The other way would have been to update the doc etc., but it does not seem 
relevant here.
This comes from a previous review by James and I agreed with him not to change 
the API.
As soon as this patch is applied, I will send my updated v4 patchset "Fix 
mpeg1/2 stream copy" which will allow reading rc_buffer_size through side data.
Sorry about that, maybe I should have sent all of my patches in a bunch, but 
this little patch is somewhat unrelated to my original patchset and I thought 
it was better to separate things this way.

Nicolas
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH 03/12] lavfi: drop vf_qp

2020-02-24 Thread Anton Khirnov
Quoting Paul B Mahol (2020-02-24 13:56:56)
> Filter should not be removed, it should use qp via frame side data.

For what purpose? I have yet to hear of any valid use case for this
filter.

-- 
Anton Khirnov
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH 03/12] lavfi: drop vf_qp

2020-02-24 Thread Anton Khirnov
Quoting Soft Works (2020-02-24 17:13:54)
> > -Original Message-
> > From: ffmpeg-devel  On Behalf Of
> > Anton Khirnov
> > Sent: Monday, February 24, 2020 3:55 PM
> > To: FFmpeg development discussions and patches  > de...@ffmpeg.org>
> > Subject: Re: [FFmpeg-devel] [PATCH 03/12] lavfi: drop vf_qp
> > 
> > Quoting Carl Eugen Hoyos (2020-02-24 13:50:57)
> > > Am Mo., 24. Feb. 2020 um 13:40 Uhr schrieb Anton Khirnov
> > :
> > > >
> > > > It fundamentally depends on an API that has been deprecated for five
> > > > years, has seen no commits since that time and is of highly dubious
> > > > usefulness.
> > >
> > > Please explain how the removed functionality was replaced.
> > 
> > It was not, for the reasons mentioned in the commit message. In my view,
> > the fact that nobody fixed it in all that time proves that nobody cares 
> > about
> > this functionality and thus that there is no value in keeping it.
> > 
> > Furthermore, I believe this filter (and all the associated "postprocessing"
> > ones) are anachronistic relics of the DivX era. They were in fashion around
> > ~2005 (though I doubt they were actually improving anything even then) but
> > nobody with a clue has used them since
> 
> Following those or similar arguments in a consequent way, would quickly 
> constitute quite a list of ffmpeg features having "no value" anymore.

Yes, and all features with no value should be removed. They are a burden
for both developers and users. Keeping them is bad for the project and
not good for anyone.

> 
> Removing features from one day to another would appear to me as a bit
> too extreme, no matter how useless the feature might be.

It's not "from one day to another" though. This functionality has been
deprecated for five years. And in that entire time nobody ever had
enough interest to do anything about it.

> 
> Maybe it would make sense to introduce some kind of feature category
> like "legacy features" where those types of features can be 'parked'
> for a while before getting removed eventually. 

Git history is not going anywhere. If there is ever a use case for this
(which I strongly doubt, but could happen), people are free to take the
code from history and adapt it to whatever new API they come up with.

-- 
Anton Khirnov
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH 03/12] lavfi: drop vf_qp

2020-02-24 Thread Anton Khirnov
Quoting Paul B Mahol (2020-02-24 17:02:52)
> On 2/24/20, James Almer  wrote:
> > On Monday, February 24, 2020, Carl Eugen Hoyos  wrote:
> >>
> >>
> >>
> >>> Am 24.02.2020 um 15:54 schrieb Anton Khirnov :
> >>>
> >>> Quoting Carl Eugen Hoyos (2020-02-24 13:50:57)
> > Am Mo., 24. Feb. 2020 um 13:40 Uhr schrieb Anton Khirnov <
> > an...@khirnov.net>:
> >
> > It fundamentally depends on an API that has been deprecated for five
> > years, has seen no commits since that time and is of highly dubious
> > usefulness.
> 
>  Please explain how the removed functionality was replaced.
> >>>
> >>> It was not, for the reasons mentioned in the commit message. In my view,
> >>> the fact that nobody fixed it in all that time proves that nobody cares
> >>> about this functionality and thus that there is no value in keeping it.
> >>
> >> In this case your patch set is not acceptable: I strongly suggest you
> > work on something that improves FFmpeg instead of removing features.
> >>
> >> Carl Eugen
> >
> > Anton argued why it should be removed. You should do the same about why it
> > should not. Simply saying you are against removing features other
> > developers consider useless is not enough.
> 
> Filter as is was simply never marked for deprecation, same applies for
> removed features to other filters in this set.

So what? It produced deprecation warnings on every build for five years.

Are you claiming you have a use case for it? Or know about someone who
does?

-- 
Anton Khirnov
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH 03/12] lavfi: drop vf_qp

2020-02-24 Thread Paul B Mahol
On 2/24/20, Soft Works  wrote:
>> -Original Message-
>> From: ffmpeg-devel  On Behalf Of
>> Anton Khirnov
>> Sent: Monday, February 24, 2020 3:55 PM
>> To: FFmpeg development discussions and patches > de...@ffmpeg.org>
>> Subject: Re: [FFmpeg-devel] [PATCH 03/12] lavfi: drop vf_qp
>>
>> Quoting Carl Eugen Hoyos (2020-02-24 13:50:57)
>> > Am Mo., 24. Feb. 2020 um 13:40 Uhr schrieb Anton Khirnov
>> :
>> > >
>> > > It fundamentally depends on an API that has been deprecated for five
>> > > years, has seen no commits since that time and is of highly dubious
>> > > usefulness.
>> >
>> > Please explain how the removed functionality was replaced.
>>
>> It was not, for the reasons mentioned in the commit message. In my view,
>> the fact that nobody fixed it in all that time proves that nobody cares
>> about
>> this functionality and thus that there is no value in keeping it.
>>
>> Furthermore, I believe this filter (and all the associated
>> "postprocessing"
>> ones) are anachronistic relics of the DivX era. They were in fashion
>> around
>> ~2005 (though I doubt they were actually improving anything even then)
>> but
>> nobody with a clue has used them since
>
> Following those or similar arguments in a consequent way, would quickly
> constitute quite a list of ffmpeg features having "no value" anymore.
>

Please write such a list, I'm interested.

> Removing features from one day to another would appear to me as a bit
> too extreme, no matter how useless the feature might be.
>
> Maybe it would make sense to introduce some kind of feature category
> like "legacy features" where those types of features can be 'parked'
> for a while before getting removed eventually.
>
> (of course, allowing to configure build with or without those features).
>
> softworkz
>
> ___
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
>
> To unsubscribe, visit link above, or email
> ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH 03/12] lavfi: drop vf_qp

2020-02-24 Thread Soft Works
> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> Anton Khirnov
> Sent: Monday, February 24, 2020 3:55 PM
> To: FFmpeg development discussions and patches  de...@ffmpeg.org>
> Subject: Re: [FFmpeg-devel] [PATCH 03/12] lavfi: drop vf_qp
> 
> Quoting Carl Eugen Hoyos (2020-02-24 13:50:57)
> > Am Mo., 24. Feb. 2020 um 13:40 Uhr schrieb Anton Khirnov
> :
> > >
> > > It fundamentally depends on an API that has been deprecated for five
> > > years, has seen no commits since that time and is of highly dubious
> > > usefulness.
> >
> > Please explain how the removed functionality was replaced.
> 
> It was not, for the reasons mentioned in the commit message. In my view,
> the fact that nobody fixed it in all that time proves that nobody cares about
> this functionality and thus that there is no value in keeping it.
> 
> Furthermore, I believe this filter (and all the associated "postprocessing"
> ones) are anachronistic relics of the DivX era. They were in fashion around
> ~2005 (though I doubt they were actually improving anything even then) but
> nobody with a clue has used them since

Following those or similar arguments consistently would quickly
produce quite a list of ffmpeg features having "no value" anymore.

Removing features from one day to the next would appear to me as a bit
too extreme, no matter how useless the feature might be.

Maybe it would make sense to introduce some kind of feature category
like "legacy features" where those types of features can be 'parked'
for a while before getting removed eventually. 

(of course, allowing to configure build with or without those features).

softworkz

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH] avcodec/mpeg12dec: Do not alter avctx->rc_buffer_size

2020-02-24 Thread Hendrik Leppkes
On Mon, Feb 24, 2020 at 5:13 PM Nicolas Gaullier
 wrote:
>

rc_buffer_size doesn't really have a meaning to a decoder (it's for
encoders and muxers), so why should it not be able to change it to
match the value it reads from the bitstream?

- Hendrik
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

[FFmpeg-devel] [PATCH] avcodec/mpeg12dec: Do not alter avctx->rc_buffer_size

2020-02-24 Thread Nicolas Gaullier
---
 libavcodec/mpeg12dec.c | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/libavcodec/mpeg12dec.c b/libavcodec/mpeg12dec.c
index 17f9495a1d..2945728edd 100644
--- a/libavcodec/mpeg12dec.c
+++ b/libavcodec/mpeg12dec.c
@@ -64,6 +64,7 @@ typedef struct Mpeg1Context {
 int slice_count;
 AVRational save_aspect;
 int save_width, save_height, save_progressive_seq;
+int rc_buffer_size;
 AVRational frame_rate_ext;  /* MPEG-2 specific framerate modificator */
 int sync;   /* Did we reach a sync point like a 
GOP/SEQ/KEYFrame? */
 int tmpgexs;
@@ -1417,7 +1418,7 @@ static void mpeg_decode_sequence_extension(Mpeg1Context 
*s1)
 bit_rate_ext = get_bits(&s->gb, 12);  /* XXX: handle it */
 s->bit_rate += (bit_rate_ext << 18) * 400LL;
 check_marker(s->avctx, &s->gb, "after bit rate extension");
-s->avctx->rc_buffer_size += get_bits(&s->gb, 8) * 1024 * 16 << 10;
+s1->rc_buffer_size += get_bits(&s->gb, 8) * 1024 * 16 << 10;
 
 s->low_delay = get_bits1(&s->gb);
 if (s->avctx->flags & AV_CODEC_FLAG_LOW_DELAY)
@@ -1433,7 +1434,7 @@ static void mpeg_decode_sequence_extension(Mpeg1Context 
*s1)
 av_log(s->avctx, AV_LOG_DEBUG,
"profile: %d, level: %d ps: %d cf:%d vbv buffer: %d, 
bitrate:%"PRId64"\n",
s->avctx->profile, s->avctx->level, s->progressive_sequence, 
s->chroma_format,
-   s->avctx->rc_buffer_size, s->bit_rate);
+   s1->rc_buffer_size, s->bit_rate);
 }
 
 static void mpeg_decode_sequence_display_extension(Mpeg1Context *s1)
@@ -2118,7 +2119,7 @@ static int mpeg1_decode_sequence(AVCodecContext *avctx,
 return AVERROR_INVALIDDATA;
 }
 
-s->avctx->rc_buffer_size = get_bits(&s->gb, 10) * 1024 * 16;
+s1->rc_buffer_size = get_bits(&s->gb, 10) * 1024 * 16;
 skip_bits(&s->gb, 1);
 
 /* get matrix */
@@ -2167,7 +2168,7 @@ static int mpeg1_decode_sequence(AVCodecContext *avctx,
 
 if (s->avctx->debug & FF_DEBUG_PICT_INFO)
 av_log(s->avctx, AV_LOG_DEBUG, "vbv buffer: %d, bitrate:%"PRId64", 
aspect_ratio_info: %d \n",
-   s->avctx->rc_buffer_size, s->bit_rate, s->aspect_ratio_info);
+   s1->rc_buffer_size, s->bit_rate, s->aspect_ratio_info);
 
 return 0;
 }
-- 
2.25.0.windows.1

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH v4 08/21] h264_redundant_pps: Make it reference-compatible

2020-02-24 Thread Andreas Rheinhardt
Mark Thompson:
> From: Andreas Rheinhardt 
> 
> Since c6a63e11092c975b89d824f08682fe31948d3686, the parameter sets
> modified as content of PPS units were references shared with the
> CodedBitstreamH264Context, so modifying them alters the parsing process
> of future access units which meant that frames often got discarded
> because invalid values were parsed. This patch makes h264_redundant_pps
> compatible with the reality of reference-counted parameter sets.
> 
> Signed-off-by: Andreas Rheinhardt 
> Signed-off-by: Mark Thompson 
> ---

You can now add ticket #7807 to the commit message.

- Andreas
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH 03/12] lavfi: drop vf_qp

2020-02-24 Thread Paul B Mahol
On 2/24/20, James Almer  wrote:
> On Monday, February 24, 2020, Carl Eugen Hoyos  wrote:
>>
>>
>>
>>> Am 24.02.2020 um 15:54 schrieb Anton Khirnov :
>>>
>>> Quoting Carl Eugen Hoyos (2020-02-24 13:50:57)
> Am Mo., 24. Feb. 2020 um 13:40 Uhr schrieb Anton Khirnov <
> an...@khirnov.net>:
>
> It fundamentally depends on an API that has been deprecated for five
> years, has seen no commits since that time and is of highly dubious
> usefulness.

 Please explain how the removed functionality was replaced.
>>>
>>> It was not, for the reasons mentioned in the commit message. In my view,
>>> the fact that nobody fixed it in all that time proves that nobody cares
>>> about this functionality and thus that there is no value in keeping it.
>>
>> In this case your patch set is not acceptable: I strongly suggest you
> work on something that improves FFmpeg instead of removing features.
>>
>> Carl Eugen
>
> Anton argued why it should be removed. You should do the same about why it
> should not. Simply saying you are against removing features other
> developers consider useless is not enough.

The filter as is was simply never marked for deprecation; the same applies
to the features removed from other filters in this set.

>
>> ___
>> ffmpeg-devel mailing list
>> ffmpeg-devel@ffmpeg.org
>> https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
>>
>> To unsubscribe, visit link above, or email
>> ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
> ___
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
>
> To unsubscribe, visit link above, or email
> ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH 03/12] lavfi: drop vf_qp

2020-02-24 Thread James Almer
On Monday, February 24, 2020, Carl Eugen Hoyos  wrote:
>
>
>
>> Am 24.02.2020 um 15:54 schrieb Anton Khirnov :
>>
>> Quoting Carl Eugen Hoyos (2020-02-24 13:50:57)
 Am Mo., 24. Feb. 2020 um 13:40 Uhr schrieb Anton Khirnov <
an...@khirnov.net>:

 It fundamentally depends on an API that has been deprecated for five
 years, has seen no commits since that time and is of highly dubious
 usefulness.
>>>
>>> Please explain how the removed functionality was replaced.
>>
>> It was not, for the reasons mentioned in the commit message. In my view,
>> the fact that nobody fixed it in all that time proves that nobody cares
>> about this functionality and thus that there is no value in keeping it.
>
> In this case your patch set is not acceptable: I strongly suggest you
work on something that improves FFmpeg instead of removing features.
>
> Carl Eugen

Anton argued why it should be removed. You should do the same about why it
should not. Simply saying you are against removing features other
developers consider useless is not enough.

> ___
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
>
> To unsubscribe, visit link above, or email
> ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] Major bump preparations - lavu

2020-02-24 Thread Hendrik Leppkes
On Mon, Feb 24, 2020 at 2:02 PM Carl Eugen Hoyos  wrote:
>
> Am Mo., 24. Feb. 2020 um 13:39 Uhr schrieb Anton Khirnov :
>
> > we have discussed previously that we want to do a major bump soon.
> > This set starts with some preparations for that. After it, lavu should be
> > ready for the bump.
> >
> > Please comment
>
> I don't think the set is acceptable.
>

Actual technical reasons for that would make for great reading. :)

- Hendrik
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH 1/4] avfilter/vf_sr.c: refine code to use AVPixFmtDescriptor.log2_chroma_h/w

2020-02-24 Thread Pedro Arthur
Em seg., 24 de fev. de 2020 às 05:50, Guo, Yejun  escreveu:
>
> Signed-off-by: Guo, Yejun 
> ---
>  libavfilter/vf_sr.c | 40 ++--
>  1 file changed, 6 insertions(+), 34 deletions(-)
>
> diff --git a/libavfilter/vf_sr.c b/libavfilter/vf_sr.c
> index 562b030..f000eda 100644
> --- a/libavfilter/vf_sr.c
> +++ b/libavfilter/vf_sr.c
> @@ -176,40 +176,12 @@ static int config_props(AVFilterLink *inlink)
>  sr_context->sws_slice_h = inlink->h;
>  } else {
>  if (inlink->format != AV_PIX_FMT_GRAY8){
> -sws_src_h = sr_context->input.height;
> -sws_src_w = sr_context->input.width;
> -sws_dst_h = sr_context->output.height;
> -sws_dst_w = sr_context->output.width;
> -
> -switch (inlink->format){
> -case AV_PIX_FMT_YUV420P:
> -sws_src_h = AV_CEIL_RSHIFT(sws_src_h, 1);
> -sws_src_w = AV_CEIL_RSHIFT(sws_src_w, 1);
> -sws_dst_h = AV_CEIL_RSHIFT(sws_dst_h, 1);
> -sws_dst_w = AV_CEIL_RSHIFT(sws_dst_w, 1);
> -break;
> -case AV_PIX_FMT_YUV422P:
> -sws_src_w = AV_CEIL_RSHIFT(sws_src_w, 1);
> -sws_dst_w = AV_CEIL_RSHIFT(sws_dst_w, 1);
> -break;
> -case AV_PIX_FMT_YUV444P:
> -break;
> -case AV_PIX_FMT_YUV410P:
> -sws_src_h = AV_CEIL_RSHIFT(sws_src_h, 2);
> -sws_src_w = AV_CEIL_RSHIFT(sws_src_w, 2);
> -sws_dst_h = AV_CEIL_RSHIFT(sws_dst_h, 2);
> -sws_dst_w = AV_CEIL_RSHIFT(sws_dst_w, 2);
> -break;
> -case AV_PIX_FMT_YUV411P:
> -sws_src_w = AV_CEIL_RSHIFT(sws_src_w, 2);
> -sws_dst_w = AV_CEIL_RSHIFT(sws_dst_w, 2);
> -break;
> -default:
> -av_log(context, AV_LOG_ERROR,
> -   "could not create SwsContext for scaling for given 
> input pixel format: %s\n",
> -   av_get_pix_fmt_name(inlink->format));
> -return AVERROR(EIO);
> -}
> +const AVPixFmtDescriptor *desc = 
> av_pix_fmt_desc_get(inlink->format);
> +sws_src_h = AV_CEIL_RSHIFT(sr_context->input.height, 
> desc->log2_chroma_h);
> +sws_src_w = AV_CEIL_RSHIFT(sr_context->input.width, 
> desc->log2_chroma_w);
> +sws_dst_h = AV_CEIL_RSHIFT(sr_context->output.height, 
> desc->log2_chroma_h);
> +sws_dst_w = AV_CEIL_RSHIFT(sr_context->output.width, 
> desc->log2_chroma_w);
> +
>  sr_context->sws_contexts[0] = sws_getContext(sws_src_w, 
> sws_src_h, AV_PIX_FMT_GRAY8,
>   sws_dst_w, 
> sws_dst_h, AV_PIX_FMT_GRAY8,
>   SWS_BICUBIC, NULL, 
> NULL, NULL);
> --
> 2.7.4
>
LGTM

> ___
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
>
> To unsubscribe, visit link above, or email
> ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH 03/12] lavfi: drop vf_qp

2020-02-24 Thread Carl Eugen Hoyos



> On Feb 24, 2020, at 3:54 PM, Anton Khirnov wrote:
> 
> Quoting Carl Eugen Hoyos (2020-02-24 13:50:57)
>>> Am Mo., 24. Feb. 2020 um 13:40 Uhr schrieb Anton Khirnov 
>>> :
>>> 
>>> It fundamentally depends on an API that has been deprecated for five
>>> years, has seen no commits since that time and is of highly dubious
>>> usefulness.
>> 
>> Please explain how the removed functionality was replaced.
> 
> It was not, for the reasons mentioned in the commit message. In my view,
> the fact that nobody fixed it in all that time proves that nobody cares
> about this functionality and thus that there is no value in keeping it.

In this case your patch set is not acceptable: I strongly suggest you work on 
something that improves FFmpeg instead of removing features.

Carl Eugen
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

[FFmpeg-devel] [PATCH] Removed bogus/duplicate PNG parser subsystem entry.

2020-02-24 Thread Anamitra Ghorui
---
 libavcodec/Makefile | 1 -
 1 file changed, 1 deletion(-)

diff --git a/libavcodec/Makefile b/libavcodec/Makefile
index 0de585279c..f1c032b456 100644
--- a/libavcodec/Makefile
+++ b/libavcodec/Makefile
@@ -1059,7 +1059,6 @@ OBJS-$(CONFIG_MLP_PARSER)  += mlp_parse.o 
mlp_parser.o mlp.o
 OBJS-$(CONFIG_MPEG4VIDEO_PARSER)   += mpeg4video_parser.o h263.o \
   mpeg4videodec.o mpeg4video.o \
   ituh263dec.o h263dec.o h263data.o
-OBJS-$(CONFIG_PNG_PARSER)  += png_parser.o
 OBJS-$(CONFIG_MPEGAUDIO_PARSER)+= mpegaudio_parser.o
 OBJS-$(CONFIG_MPEGVIDEO_PARSER)+= mpegvideo_parser.o\
   mpeg12.o mpeg12data.o
-- 
2.17.1


___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH 03/12] lavfi: drop vf_qp

2020-02-24 Thread Anton Khirnov
Quoting Carl Eugen Hoyos (2020-02-24 13:50:57)
> On Mon, Feb 24, 2020 at 1:40 PM Anton Khirnov wrote:
> >
> > It fundamentally depends on an API that has been deprecated for five
> > years, has seen no commits since that time and is of highly dubious
> > usefulness.
> 
> Please explain how the removed functionality was replaced.

It was not, for the reasons mentioned in the commit message. In my view,
the fact that nobody fixed it in all that time proves that nobody cares
about this functionality and thus that there is no value in keeping it.

Furthermore, I believe this filter (and all the associated
"postprocessing" ones) are anachronistic relics of the DivX era. They
were in fashion around ~2005 (though I doubt they were actually
improving anything even then) but nobody with a clue has used them since
H.264 took over. The value they bring to the project is actually
negative, since users who don't know any better might use them under
false impression that they would improve video quality. MPV removed
those filters eight years ago and AFAIK nobody ever missed them.

-- 
Anton Khirnov
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] GSoc Project: ABR for FFmpeg.

2020-02-24 Thread Steven Liu


> On Feb 24, 2020, at 10:09 PM, tianchi huang wrote:
> 
> Hi,
> 
> I am interested in taking up a project with FFmpeg for GSOC’20. Specifically, 
> I found that the proposal called ‘ABR meets FFmpeg’ is extremely fit for me, 
> since:
> 
> i) I am proficient in c, c++, js, etc.  Especially, I really enjoy submitting 
> my work to and contributing to the open-source community. For instance, I 
> have submitted a work called ‘kurento-rtmp’, a module that can transcode the 
> WebRTC stream to the RTMP stream, to the Github. Till now, it has already 
> achieved 110 stars. (https://github.com/godka/kurento-rtmp). Meanwhile, I 
> have also contributed some work about ABR video streaming. For example, a 
> classical ABR method MPC (model predictive control) written in c++ 
> (https://github.com/thu-media/Comyco/blob/master/cpp-linux/mpc.cpp).
> 
> ii) I am in that area. Currently, I am a Ph.D. student in the Department of 
> Computer Science and Technology at Tsinghua University, advised by Prof. 
> Lifeng Sun.  My research work focuses on multimedia network streaming. In 
> recent years I have published several adaptive video streaming papers in the 
> *top conference*, including SIGCOMM, INFOCOM,  ACM Multimedia, etc. The full 
> publication list is shown in https://godka.github.io.
> 
> I have a plan for this work:
> Taking HLS (HTTP Live Streaming) as an example, I need to change most of the 
> code in `hls.c'. The previous algorithm fails to support ABR algorithms since 
> it directly implements adding_stream_to_programs after reading the playlist. 
> At the same time, the live streaming part is also relatively simple, setting 
> the clock directly as an infinite loop. To that end, the key idea is to 
> rewrite this part and add the concept of buffer occupancy, which is an 
> essential part of ABR algorithms. 
> Moreover, I also attempt to create a new module ‘abr.c’, which can support 
> various of ABR algorithms, such as MPC, HYB, BBA, and especially Pensieve and 
> Comyco (such state-of-the-art methods leverage *Neural Network* to make the 
> decision).
> 
> In general, can you please provide a chance to me for attending GSOC’20?
Sure, you are welcome to contribute to FFmpeg.
Please refer to the documentation:
http://ffmpeg.org/documentation.html

Thanks

Steven

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

[FFmpeg-devel] GSoc Project: ABR for FFmpeg.

2020-02-24 Thread tianchi huang
Hi,

I am interested in taking up a project with FFmpeg for GSoC ’20. Specifically, I 
found that the proposal called ‘ABR meets FFmpeg’ is an excellent fit for me, 
since:

i) I am proficient in C, C++, JS, etc. In particular, I really enjoy submitting 
my work to, and contributing to, the open-source community. For instance, I have 
published a project called ‘kurento-rtmp’, a module that transcodes a WebRTC 
stream into an RTMP stream, on GitHub; so far it has earned 110 stars 
(https://github.com/godka/kurento-rtmp). I have also contributed some work on 
ABR video streaming, for example a classical ABR method, MPC (model predictive 
control), written in C++ 
(https://github.com/thu-media/Comyco/blob/master/cpp-linux/mpc.cpp).

ii) I work in this area. Currently, I am a Ph.D. student in the Department of 
Computer Science and Technology at Tsinghua University, advised by Prof. Lifeng 
Sun. My research focuses on multimedia network streaming. In recent years I have 
published several adaptive video streaming papers at *top conferences*, 
including SIGCOMM, INFOCOM, and ACM Multimedia. The full publication list is 
available at https://godka.github.io.

I have a plan for this work:
Taking HLS (HTTP Live Streaming) as an example, I need to change most of the 
code in `hls.c'. The current code cannot support ABR algorithms, since it calls 
adding_stream_to_programs directly after reading the playlist. The live 
streaming part is also relatively simple, driving the clock in an infinite 
loop. The key idea is therefore to rewrite this part and introduce the notion 
of buffer occupancy, which is an essential input to ABR algorithms (a rough 
sketch follows below).
Moreover, I also plan to create a new module ‘abr.c’ that can support various 
ABR algorithms, such as MPC, HYB, BBA, and especially Pensieve and Comyco 
(state-of-the-art methods that use a *neural network* to make the decision).
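
To make the buffer-occupancy idea concrete, here is a minimal BBA-style
rate-selection sketch in plain C. The function name, parameters, and the linear
buffer-to-bitrate map are illustrative assumptions only, not existing hls.c
code:

/* Minimal BBA-style rate selection: map the current buffer occupancy onto
 * the bitrate ladder.  All names below are hypothetical illustration, not
 * existing hls.c symbols. */
#include <stddef.h>
#include <stdint.h>

/* bitrates[] must be sorted in ascending order, in bits per second. */
static int abr_select_variant(const int64_t *bitrates, size_t nb_bitrates,
                              double buffer_sec,
                              double reservoir_sec, double cushion_sec)
{
    size_t i, best = 0;

    if (!nb_bitrates)
        return -1;
    if (buffer_sec <= reservoir_sec)
        return 0;                          /* buffer nearly empty: lowest rate */
    if (buffer_sec >= reservoir_sec + cushion_sec)
        return (int)(nb_bitrates - 1);     /* buffer comfortable: highest rate */

    {
        /* Linear map of occupancy (reservoir..reservoir+cushion) onto the
         * (lowest..highest) bitrate range, then pick the largest rung below it. */
        double f = (buffer_sec - reservoir_sec) / cushion_sec;
        int64_t target = bitrates[0] +
            (int64_t)(f * (double)(bitrates[nb_bitrates - 1] - bitrates[0]));
        for (i = 0; i < nb_bitrates; i++)
            if (bitrates[i] <= target)
                best = i;
    }
    return (int)best;
}

An MPC-style controller would replace the linear map with a short lookahead
over predicted throughput, but the buffer-occupancy input stays the same.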

In general, could you please give me a chance to take part in GSoC ’20?

Best,
Tianchi Huang.
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] GSoC 2020

2020-02-24 Thread Guo, Yejun


> -Original Message-
> From: ffmpeg-devel [mailto:ffmpeg-devel-boun...@ffmpeg.org] On Behalf Of
> Thilo Borgmann
> Sent: Saturday, February 22, 2020 5:50 PM
> To: ffmpeg-devel@ffmpeg.org
> Subject: Re: [FFmpeg-devel] GSoC 2020
> 
> Hi,
> 
> > please help fill the 2020 GSoC Ideas page
> > https://trac.ffmpeg.org/wiki/SponsoringPrograms/GSoC/2020
> >
> > (This page is key to being acccepted to GSoC)
> 
> I guess everybody already noticed that FFmpeg had been accepted as a
> mentoring Org in GSoC 2020! :D

Hi,

Just to keep another chance open for FFmpeg, I proposed an idea under
'Intel® Video and Audio for Linux' to tune the performance of the native layer
conv2d; see https://01.org/linuxmedia/gsoc/gsoc-2020-ideas for details.

I'll continue if no one objects. 

> 
> Thanks to all potential mentors!
> 
> -Thilo
> ___
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
> 
> To unsubscribe, visit link above, or email
> ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH 04/13] avformat/mux: Cosmetics

2020-02-24 Thread Andreas Rheinhardt
Andreas Rheinhardt:
> Signed-off-by: Andreas Rheinhardt 
> ---
>  libavformat/mux.c | 19 +--
>  1 file changed, 9 insertions(+), 10 deletions(-)
> 
> diff --git a/libavformat/mux.c b/libavformat/mux.c
> index 2728c62de5..4089382ffd 100644
> --- a/libavformat/mux.c
> +++ b/libavformat/mux.c
> @@ -921,13 +921,13 @@ int ff_interleave_add_packet(AVFormatContext *s, 
> AVPacket *pkt,
>  {
>  int ret;
>  AVPacketList **next_point, *this_pktl;
> -AVStream *st   = s->streams[pkt->stream_index];
> -int chunked= s->max_chunk_size || s->max_chunk_duration;
> +AVStream *st = s->streams[pkt->stream_index];
> +int chunked  = s->max_chunk_size || s->max_chunk_duration;
>  
> -this_pktl  = av_malloc(sizeof(AVPacketList));
> +this_pktl= av_malloc(sizeof(AVPacketList));
>  if (!this_pktl)
>  return AVERROR(ENOMEM);
> -if ((pkt->flags & AV_PKT_FLAG_UNCODED_FRAME)) {
> +if (pkt->flags & AV_PKT_FLAG_UNCODED_FRAME) {
>  av_assert0(pkt->size == UNCODED_FRAME_PACKET_SIZE);
>  av_assert0(((AVFrame *)pkt->data)->buf);
>  } else {
> @@ -940,7 +940,7 @@ int ff_interleave_add_packet(AVFormatContext *s, AVPacket 
> *pkt,
>  av_packet_move_ref(&this_pktl->pkt, pkt);
>  pkt = &this_pktl->pkt;
>  
> -if (s->streams[pkt->stream_index]->last_in_packet_buffer) {
> +if (st->last_in_packet_buffer) {
>  next_point = &(st->last_in_packet_buffer->next);
>  } else {
>  next_point = &s->internal->packet_buffer;
> @@ -952,8 +952,8 @@ int ff_interleave_add_packet(AVFormatContext *s, AVPacket 
> *pkt,
>  st->interleaver_chunk_duration += pkt->duration;
>  if (   (s->max_chunk_size && st->interleaver_chunk_size > 
> s->max_chunk_size)
>  || (max && st->interleaver_chunk_duration   > max)) {
> -st->interleaver_chunk_size  = 0;
> -this_pktl->pkt.flags |= CHUNK_START;
> +st->interleaver_chunk_size = 0;
> +pkt->flags |= CHUNK_START;
>  if (max && st->interleaver_chunk_duration > max) {
>  int64_t syncoffset = (st->codecpar->codec_type == 
> AVMEDIA_TYPE_VIDEO)*max/2;
>  int64_t syncto = av_rescale(pkt->dts + syncoffset, 1, 
> max)*max - syncoffset;
> @@ -964,7 +964,7 @@ int ff_interleave_add_packet(AVFormatContext *s, AVPacket 
> *pkt,
>  }
>  }
>  if (*next_point) {
> -if (chunked && !(this_pktl->pkt.flags & CHUNK_START))
> +if (chunked && !(pkt->flags & CHUNK_START))
>  goto next_non_null;
>  
>  if (compare(s, &s->internal->packet_buffer_end->pkt, pkt)) {
> @@ -985,8 +985,7 @@ next_non_null:
>  
>  this_pktl->next = *next_point;
>  
> -s->streams[pkt->stream_index]->last_in_packet_buffer =
> -*next_point  = this_pktl;
> +st->last_in_packet_buffer = *next_point = this_pktl;
>  
>  return 0;
>  }
> 
Ping. (The next patch [1] in this series will be needed for the next
major bump.)

- Andreas

[1]:
https://patchwork.ffmpeg.org/project/ffmpeg/patch/20190813024726.6596-5-andreas.rheinha...@gmail.com/
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH 02/12] fifo: hide the definition of AVFifoBuffer in next+1 major bump

2020-02-24 Thread Andreas Rheinhardt
Anton Khirnov:
> There is no reason whatsoever for it to be public.
> ---

The flac-parser (and maybe other code) uses it directly.

- Andreas
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] Major bump preparations - lavu

2020-02-24 Thread Carl Eugen Hoyos
On Mon, Feb 24, 2020 at 1:39 PM Anton Khirnov wrote:

> we have discussed previously that we want to do a major bump soon.
> This set starts with some preparations for that. After it, lavu should be
> ready for the bump.
>
> Please comment

I don't think the set is acceptable.

Carl Eugen
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH 03/12] lavfi: drop vf_qp

2020-02-24 Thread Paul B Mahol
The filter should not be removed; it should get the QP values via frame side data.
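
A rough sketch of what the producer side of that could look like, assuming a
side-data type along the lines of AV_FRAME_DATA_QP_TABLE_DATA and a flat
per-macroblock int8_t layout; both are assumptions for illustration, not a
settled replacement API:

/* Sketch only: attach a per-MB QP table to a decoded frame as side data.
 * The side-data type and layout are assumed for illustration. */
#include <stdint.h>
#include <string.h>
#include "libavutil/error.h"
#include "libavutil/frame.h"

static int attach_qp_side_data(AVFrame *frame, const int8_t *qp_table,
                               int mb_width, int mb_height)
{
    AVFrameSideData *sd = av_frame_new_side_data(frame,
                                                 AV_FRAME_DATA_QP_TABLE_DATA,
                                                 mb_width * mb_height);
    if (!sd)
        return AVERROR(ENOMEM);
    memcpy(sd->data, qp_table, mb_width * mb_height);
    return 0;
}

A filter would then read the table back with av_frame_get_side_data() instead
of the deprecated av_frame_get_qp_table().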

On 2/24/20, Anton Khirnov  wrote:
> It fundamentally depends on an API that has been deprecated for five
> years, has seen no commits since that time and is of highly dubious
> usefulness.
> ---
>  doc/filters.texi|  32 ---
>  libavfilter/Makefile|   1 -
>  libavfilter/allfilters.c|   1 -
>  libavfilter/vf_qp.c | 183 
>  tests/fate/filter-video.mak |   7 +-
>  tests/ref/fate/filter-pp2   |   1 -
>  tests/ref/fate/filter-pp3   |   1 -
>  7 files changed, 1 insertion(+), 225 deletions(-)
>  delete mode 100644 libavfilter/vf_qp.c
>  delete mode 100644 tests/ref/fate/filter-pp2
>  delete mode 100644 tests/ref/fate/filter-pp3
>
> diff --git a/doc/filters.texi b/doc/filters.texi
> index 70fd7a4cc7..2a1235183f 100644
> --- a/doc/filters.texi
> +++ b/doc/filters.texi
> @@ -15335,38 +15335,6 @@ telecine NTSC input:
>  ffmpeg -i input -vf pullup -r 24000/1001 ...
>  @end example
>
> -@section qp
> -
> -Change video quantization parameters (QP).
> -
> -The filter accepts the following option:
> -
> -@table @option
> -@item qp
> -Set expression for quantization parameter.
> -@end table
> -
> -The expression is evaluated through the eval API and can contain, among
> others,
> -the following constants:
> -
> -@table @var
> -@item known
> -1 if index is not 129, 0 otherwise.
> -
> -@item qp
> -Sequential index starting from -129 to 128.
> -@end table
> -
> -@subsection Examples
> -
> -@itemize
> -@item
> -Some equation like:
> -@example
> -qp=2+2*sin(PI*qp)
> -@end example
> -@end itemize
> -
>  @section random
>
>  Flush video frames from internal cache of frames into a random order.
> diff --git a/libavfilter/Makefile b/libavfilter/Makefile
> index 089880a39d..74968b32e1 100644
> --- a/libavfilter/Makefile
> +++ b/libavfilter/Makefile
> @@ -349,7 +349,6 @@ OBJS-$(CONFIG_PROGRAM_OPENCL_FILTER) +=
> vf_program_opencl.o opencl.o fra
>  OBJS-$(CONFIG_PSEUDOCOLOR_FILTER)+= vf_pseudocolor.o
>  OBJS-$(CONFIG_PSNR_FILTER)   += vf_psnr.o framesync.o
>  OBJS-$(CONFIG_PULLUP_FILTER) += vf_pullup.o
> -OBJS-$(CONFIG_QP_FILTER) += vf_qp.o
>  OBJS-$(CONFIG_RANDOM_FILTER) += vf_random.o
>  OBJS-$(CONFIG_READEIA608_FILTER) += vf_readeia608.o
>  OBJS-$(CONFIG_READVITC_FILTER)   += vf_readvitc.o
> diff --git a/libavfilter/allfilters.c b/libavfilter/allfilters.c
> index 88ebd121ad..aa6f006ddb 100644
> --- a/libavfilter/allfilters.c
> +++ b/libavfilter/allfilters.c
> @@ -332,7 +332,6 @@ extern AVFilter ff_vf_program_opencl;
>  extern AVFilter ff_vf_pseudocolor;
>  extern AVFilter ff_vf_psnr;
>  extern AVFilter ff_vf_pullup;
> -extern AVFilter ff_vf_qp;
>  extern AVFilter ff_vf_random;
>  extern AVFilter ff_vf_readeia608;
>  extern AVFilter ff_vf_readvitc;
> diff --git a/libavfilter/vf_qp.c b/libavfilter/vf_qp.c
> deleted file mode 100644
> index 33d39493bc..00
> --- a/libavfilter/vf_qp.c
> +++ /dev/null
> @@ -1,183 +0,0 @@
> -/*
> - * Copyright (C) 2004 Michael Niedermayer 
> - *
> - * This file is part of FFmpeg.
> - *
> - * FFmpeg is free software; you can redistribute it and/or
> - * modify it under the terms of the GNU Lesser General Public
> - * License as published by the Free Software Foundation; either
> - * version 2.1 of the License, or (at your option) any later version.
> - *
> - * FFmpeg is distributed in the hope that it will be useful,
> - * but WITHOUT ANY WARRANTY; without even the implied warranty of
> - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> - * Lesser General Public License for more details.
> - *
> - * You should have received a copy of the GNU Lesser General Public
> - * License along with FFmpeg; if not, write to the Free Software
> - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301
> USA
> - */
> -
> -#include 
> -#include "libavutil/eval.h"
> -#include "libavutil/imgutils.h"
> -#include "libavutil/pixdesc.h"
> -#include "libavutil/opt.h"
> -#include "avfilter.h"
> -#include "formats.h"
> -#include "internal.h"
> -#include "video.h"
> -
> -typedef struct QPContext {
> -const AVClass *class;
> -char *qp_expr_str;
> -int8_t lut[257];
> -int h, qstride;
> -int evaluate_per_mb;
> -} QPContext;
> -
> -#define OFFSET(x) offsetof(QPContext, x)
> -#define FLAGS AV_OPT_FLAG_FILTERING_PARAM|AV_OPT_FLAG_VIDEO_PARAM
> -
> -static const AVOption qp_options[] = {
> -{ "qp", "set qp expression", OFFSET(qp_expr_str), AV_OPT_TYPE_STRING,
> {.str=NULL}, 0, 0, FLAGS },
> -{ NULL }
> -};
> -
> -AVFILTER_DEFINE_CLASS(qp);
> -
> -static int config_input(AVFilterLink *inlink)
> -{
> -AVFilterContext *ctx = inlink->dst;
> -QPContext *s = ctx->priv;
> -int i;
> -int ret;
> -AVExpr *e = NULL;
> -static const char *var_names[] = { "known", "qp", "x", "y", "w", "h",
> NULL };
> -

Re: [FFmpeg-devel] [PATCH 03/12] lavfi: drop vf_qp

2020-02-24 Thread Carl Eugen Hoyos
On Mon, Feb 24, 2020 at 1:40 PM Anton Khirnov wrote:
>
> It fundamentally depends on an API that has been deprecated for five
> years, has seen no commits since that time and is of highly dubious
> usefulness.

Please explain how the removed functionality was replaced.

Carl Eugen
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

[FFmpeg-devel] [PATCH 12/12] Add missing stddef.h includes for size_t.

2020-02-24 Thread Anton Khirnov
---
 libavutil/hash.c| 2 ++
 libavutil/hash.h| 1 +
 libavutil/murmur3.c | 2 ++
 libavutil/murmur3.h | 1 +
 libavutil/ripemd.c  | 1 +
 libavutil/ripemd.h  | 1 +
 6 files changed, 8 insertions(+)

diff --git a/libavutil/hash.c b/libavutil/hash.c
index 75edb6db78..d626c31181 100644
--- a/libavutil/hash.c
+++ b/libavutil/hash.c
@@ -17,6 +17,8 @@
  * License along with FFmpeg; if not, write to the Free Software
  * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
  */
+
+#include 
 #include 
 #include "hash.h"
 
diff --git a/libavutil/hash.h b/libavutil/hash.h
index 7693e6bf0d..af4719e423 100644
--- a/libavutil/hash.h
+++ b/libavutil/hash.h
@@ -27,6 +27,7 @@
 #ifndef AVUTIL_HASH_H
 #define AVUTIL_HASH_H
 
+#include 
 #include 
 
 #include "version.h"
diff --git a/libavutil/murmur3.c b/libavutil/murmur3.c
index 7961752515..3e85c3c94f 100644
--- a/libavutil/murmur3.c
+++ b/libavutil/murmur3.c
@@ -17,6 +17,8 @@
  * License along with FFmpeg; if not, write to the Free Software
  * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
  */
+
+#include 
 #include 
 #include "mem.h"
 #include "intreadwrite.h"
diff --git a/libavutil/murmur3.h b/libavutil/murmur3.h
index 1b09175c1e..b3b3a07de2 100644
--- a/libavutil/murmur3.h
+++ b/libavutil/murmur3.h
@@ -27,6 +27,7 @@
 #ifndef AVUTIL_MURMUR3_H
 #define AVUTIL_MURMUR3_H
 
+#include 
 #include 
 
 #include "version.h"
diff --git a/libavutil/ripemd.c b/libavutil/ripemd.c
index 4f1c4ea899..89d69cc23d 100644
--- a/libavutil/ripemd.c
+++ b/libavutil/ripemd.c
@@ -19,6 +19,7 @@
  * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
  */
 
+#include 
 #include 
 
 #include "attributes.h"
diff --git a/libavutil/ripemd.h b/libavutil/ripemd.h
index 0db6858ff3..921aa66684 100644
--- a/libavutil/ripemd.h
+++ b/libavutil/ripemd.h
@@ -28,6 +28,7 @@
 #ifndef AVUTIL_RIPEMD_H
 #define AVUTIL_RIPEMD_H
 
+#include 
 #include 
 
 #include "attributes.h"
-- 
2.24.1

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

[FFmpeg-devel] [PATCH 01/12] fifo: uninline av_fifo_peek2() on the next major bump

2020-02-24 Thread Anton Khirnov
Inline public functions should be avoided unless absolutely necessary,
and no such necessity exists in this code.
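
As an aside (not part of the patch): the reason inline public functions are a
problem is that their bodies, including any struct field offsets they touch,
get compiled into every application. A hypothetical sketch, not actual FFmpeg
code:

/* Header shipped to applications (old library version). */
typedef struct Buf { int used; int capacity; } Buf;
static inline int buf_free_space(const Buf *b) { return b->capacity - b->used; }

/* An application built against this header bakes the offsets of 'used' and
 * 'capacity' into its own binary.  If a later library release reorders or
 * removes those fields, the already-built application reads garbage even
 * though the library itself was rebuilt correctly.  An out-of-line
 *     int buf_free_space(const Buf *b);
 * keeps that knowledge inside the library and preserves the ABI. */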
---
 libavutil/fifo.c | 13 +
 libavutil/fifo.h |  5 +
 2 files changed, 18 insertions(+)

diff --git a/libavutil/fifo.c b/libavutil/fifo.c
index 1060aedf13..0baaadc521 100644
--- a/libavutil/fifo.c
+++ b/libavutil/fifo.c
@@ -23,6 +23,7 @@
 #include "avassert.h"
 #include "common.h"
 #include "fifo.h"
+#include "version.h"
 
 static AVFifoBuffer *fifo_alloc_common(void *buffer, size_t size)
 {
@@ -238,3 +239,15 @@ void av_fifo_drain(AVFifoBuffer *f, int size)
 f->rptr -= f->end - f->buffer;
 f->rndx += size;
 }
+
+#if LIBAVUTIL_VERSION_MAJOR >= 57
+uint8_t *av_fifo_peek2(const AVFifoBuffer *f, int offs)
+{
+uint8_t *ptr = f->rptr + offs;
+if (ptr >= f->end)
+ptr = f->buffer + (ptr - f->end);
+else if (ptr < f->buffer)
+ptr = f->end - (f->buffer - ptr);
+return ptr;
+}
+#endif
diff --git a/libavutil/fifo.h b/libavutil/fifo.h
index dc7bc6f0dd..8cd964ef45 100644
--- a/libavutil/fifo.h
+++ b/libavutil/fifo.h
@@ -27,6 +27,7 @@
 #include 
 #include "avutil.h"
 #include "attributes.h"
+#include "version.h"
 
 typedef struct AVFifoBuffer {
 uint8_t *buffer;
@@ -166,6 +167,7 @@ void av_fifo_drain(AVFifoBuffer *f, int size);
  * point outside to the buffer data.
  * The used buffer size can be checked with av_fifo_size().
  */
+#if LIBAVUTIL_VERSION_MAJOR < 57
 static inline uint8_t *av_fifo_peek2(const AVFifoBuffer *f, int offs)
 {
 uint8_t *ptr = f->rptr + offs;
@@ -175,5 +177,8 @@ static inline uint8_t *av_fifo_peek2(const AVFifoBuffer *f, 
int offs)
 ptr = f->end - (f->buffer - ptr);
 return ptr;
 }
+#else
+uint8_t *av_fifo_peek2(const AVFifoBuffer *f, int offs);
+#endif
 
 #endif /* AVUTIL_FIFO_H */
-- 
2.24.1

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

[FFmpeg-devel] [PATCH 11/12] mpegvideo: stop exporting QP tables

2020-02-24 Thread Anton Khirnov
This API has been deprecated for five years and is of highly dubious
usefulness.
---
 libavcodec/h263dec.c   |  2 --
 libavcodec/mpeg12dec.c |  2 --
 libavcodec/mpegvideo.c | 12 
 libavcodec/mpegvideo.h |  2 --
 libavcodec/rv10.c  |  2 --
 libavcodec/rv34.c  |  2 --
 6 files changed, 22 deletions(-)

diff --git a/libavcodec/h263dec.c b/libavcodec/h263dec.c
index 8ee844e298..9cad65e56b 100644
--- a/libavcodec/h263dec.c
+++ b/libavcodec/h263dec.c
@@ -693,12 +693,10 @@ frame_end:
 if ((ret = av_frame_ref(pict, s->current_picture_ptr->f)) < 0)
 return ret;
 ff_print_debug_info(s, s->current_picture_ptr, pict);
-ff_mpv_export_qp_table(s, pict, s->current_picture_ptr, 
FF_QSCALE_TYPE_MPEG1);
 } else if (s->last_picture_ptr) {
 if ((ret = av_frame_ref(pict, s->last_picture_ptr->f)) < 0)
 return ret;
 ff_print_debug_info(s, s->last_picture_ptr, pict);
-ff_mpv_export_qp_table(s, pict, s->last_picture_ptr, 
FF_QSCALE_TYPE_MPEG1);
 }
 
 if (s->last_picture_ptr || s->low_delay) {
diff --git a/libavcodec/mpeg12dec.c b/libavcodec/mpeg12dec.c
index 17f9495a1d..124f86e459 100644
--- a/libavcodec/mpeg12dec.c
+++ b/libavcodec/mpeg12dec.c
@@ -2062,7 +2062,6 @@ static int slice_end(AVCodecContext *avctx, AVFrame *pict)
 if (ret < 0)
 return ret;
 ff_print_debug_info(s, s->current_picture_ptr, pict);
-ff_mpv_export_qp_table(s, pict, s->current_picture_ptr, 
FF_QSCALE_TYPE_MPEG2);
 } else {
 if (avctx->active_thread_type & FF_THREAD_FRAME)
 s->picture_number++;
@@ -2073,7 +2072,6 @@ static int slice_end(AVCodecContext *avctx, AVFrame *pict)
 if (ret < 0)
 return ret;
 ff_print_debug_info(s, s->last_picture_ptr, pict);
-ff_mpv_export_qp_table(s, pict, s->last_picture_ptr, 
FF_QSCALE_TYPE_MPEG2);
 }
 }
 
diff --git a/libavcodec/mpegvideo.c b/libavcodec/mpegvideo.c
index dbb6ab9b39..55ca99f7b2 100644
--- a/libavcodec/mpegvideo.c
+++ b/libavcodec/mpegvideo.c
@@ -1443,18 +1443,6 @@ void ff_print_debug_info(MpegEncContext *s, Picture *p, 
AVFrame *pict)
  s->mb_width, s->mb_height, s->mb_stride, 
s->quarter_sample);
 }
 
-int ff_mpv_export_qp_table(MpegEncContext *s, AVFrame *f, Picture *p, int 
qp_type)
-{
-AVBufferRef *ref = av_buffer_ref(p->qscale_table_buf);
-int offset = 2*s->mb_stride + 1;
-if(!ref)
-return AVERROR(ENOMEM);
-av_assert0(ref->size >= offset + s->mb_stride * ((f->height+15)/16));
-ref->size -= offset;
-ref->data += offset;
-return av_frame_set_qp_table(f, ref, s->mb_stride, qp_type);
-}
-
 static inline int hpel_motion_lowres(MpegEncContext *s,
  uint8_t *dest, uint8_t *src,
  int field_based, int field_select,
diff --git a/libavcodec/mpegvideo.h b/libavcodec/mpegvideo.h
index 29e692f245..06d27b32f2 100644
--- a/libavcodec/mpegvideo.h
+++ b/libavcodec/mpegvideo.h
@@ -713,8 +713,6 @@ void ff_mpeg_flush(AVCodecContext *avctx);
 
 void ff_print_debug_info(MpegEncContext *s, Picture *p, AVFrame *pict);
 
-int ff_mpv_export_qp_table(MpegEncContext *s, AVFrame *f, Picture *p, int 
qp_type);
-
 void ff_write_quant_matrix(PutBitContext *pb, uint16_t *matrix);
 
 int ff_update_duplicate_context(MpegEncContext *dst, MpegEncContext *src);
diff --git a/libavcodec/rv10.c b/libavcodec/rv10.c
index 3b41d30b92..d958d3f8d1 100644
--- a/libavcodec/rv10.c
+++ b/libavcodec/rv10.c
@@ -772,12 +772,10 @@ static int rv10_decode_frame(AVCodecContext *avctx, void 
*data, int *got_frame,
 if ((ret = av_frame_ref(pict, s->current_picture_ptr->f)) < 0)
 return ret;
 ff_print_debug_info(s, s->current_picture_ptr, pict);
-ff_mpv_export_qp_table(s, pict, s->current_picture_ptr, 
FF_QSCALE_TYPE_MPEG1);
 } else if (s->last_picture_ptr) {
 if ((ret = av_frame_ref(pict, s->last_picture_ptr->f)) < 0)
 return ret;
 ff_print_debug_info(s, s->last_picture_ptr, pict);
-ff_mpv_export_qp_table(s, pict,s->last_picture_ptr, 
FF_QSCALE_TYPE_MPEG1);
 }
 
 if (s->last_picture_ptr || s->low_delay) {
diff --git a/libavcodec/rv34.c b/libavcodec/rv34.c
index d171e6e1bd..57557f537a 100644
--- a/libavcodec/rv34.c
+++ b/libavcodec/rv34.c
@@ -1617,13 +1617,11 @@ static int finish_frame(AVCodecContext *avctx, AVFrame 
*pict)
 if ((ret = av_frame_ref(pict, s->current_picture_ptr->f)) < 0)
 return ret;
 ff_print_debug_info(s, s->current_picture_ptr, pict);
-ff_mpv_export_qp_table(s, pict, s->current_picture_ptr, 
FF_QSCALE_TYPE_MPEG1);
 got_picture = 1;
 } else if (s->last_picture_ptr) {
 if ((ret = av_frame_ref(pict, s->last_picture_ptr->f)) < 0)
 return ret;
 ff_pri

[FFmpeg-devel] [PATCH 10/12] mjpegdec: stop exporting QP tables

2020-02-24 Thread Anton Khirnov
This API has been deprecated for five years and is of highly dubious
usefulness.
---
 libavcodec/mjpegdec.c | 12 ++--
 1 file changed, 2 insertions(+), 10 deletions(-)

diff --git a/libavcodec/mjpegdec.c b/libavcodec/mjpegdec.c
index d5e7c21610..c535fd0fff 100644
--- a/libavcodec/mjpegdec.c
+++ b/libavcodec/mjpegdec.c
@@ -2508,19 +2508,11 @@ eoi_parser:
 *got_frame = 1;
 s->got_picture = 0;
 
-if (!s->lossless) {
+if (!s->lossless && (avctx->debug & FF_DEBUG_QP)) {
 int qp = FFMAX3(s->qscale[0],
 s->qscale[1],
 s->qscale[2]);
-int qpw = (s->width + 15) / 16;
-AVBufferRef *qp_table_buf = av_buffer_alloc(qpw);
-if (qp_table_buf) {
-memset(qp_table_buf->data, qp, qpw);
-av_frame_set_qp_table(data, qp_table_buf, 0, 
FF_QSCALE_TYPE_MPEG1);
-}
-
-if(avctx->debug & FF_DEBUG_QP)
-av_log(avctx, AV_LOG_DEBUG, "QP: %d\n", qp);
+av_log(avctx, AV_LOG_DEBUG, "QP: %d\n", qp);
 }
 
 goto the_end;
-- 
2.24.1

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

[FFmpeg-devel] [PATCH 03/12] lavfi: drop vf_qp

2020-02-24 Thread Anton Khirnov
It fundamentally depends on an API that has been deprecated for five
years, has seen no commits since that time and is of highly dubious
usefulness.
---
 doc/filters.texi|  32 ---
 libavfilter/Makefile|   1 -
 libavfilter/allfilters.c|   1 -
 libavfilter/vf_qp.c | 183 
 tests/fate/filter-video.mak |   7 +-
 tests/ref/fate/filter-pp2   |   1 -
 tests/ref/fate/filter-pp3   |   1 -
 7 files changed, 1 insertion(+), 225 deletions(-)
 delete mode 100644 libavfilter/vf_qp.c
 delete mode 100644 tests/ref/fate/filter-pp2
 delete mode 100644 tests/ref/fate/filter-pp3

diff --git a/doc/filters.texi b/doc/filters.texi
index 70fd7a4cc7..2a1235183f 100644
--- a/doc/filters.texi
+++ b/doc/filters.texi
@@ -15335,38 +15335,6 @@ telecine NTSC input:
 ffmpeg -i input -vf pullup -r 24000/1001 ...
 @end example
 
-@section qp
-
-Change video quantization parameters (QP).
-
-The filter accepts the following option:
-
-@table @option
-@item qp
-Set expression for quantization parameter.
-@end table
-
-The expression is evaluated through the eval API and can contain, among others,
-the following constants:
-
-@table @var
-@item known
-1 if index is not 129, 0 otherwise.
-
-@item qp
-Sequential index starting from -129 to 128.
-@end table
-
-@subsection Examples
-
-@itemize
-@item
-Some equation like:
-@example
-qp=2+2*sin(PI*qp)
-@end example
-@end itemize
-
 @section random
 
 Flush video frames from internal cache of frames into a random order.
diff --git a/libavfilter/Makefile b/libavfilter/Makefile
index 089880a39d..74968b32e1 100644
--- a/libavfilter/Makefile
+++ b/libavfilter/Makefile
@@ -349,7 +349,6 @@ OBJS-$(CONFIG_PROGRAM_OPENCL_FILTER) += 
vf_program_opencl.o opencl.o fra
 OBJS-$(CONFIG_PSEUDOCOLOR_FILTER)+= vf_pseudocolor.o
 OBJS-$(CONFIG_PSNR_FILTER)   += vf_psnr.o framesync.o
 OBJS-$(CONFIG_PULLUP_FILTER) += vf_pullup.o
-OBJS-$(CONFIG_QP_FILTER) += vf_qp.o
 OBJS-$(CONFIG_RANDOM_FILTER) += vf_random.o
 OBJS-$(CONFIG_READEIA608_FILTER) += vf_readeia608.o
 OBJS-$(CONFIG_READVITC_FILTER)   += vf_readvitc.o
diff --git a/libavfilter/allfilters.c b/libavfilter/allfilters.c
index 88ebd121ad..aa6f006ddb 100644
--- a/libavfilter/allfilters.c
+++ b/libavfilter/allfilters.c
@@ -332,7 +332,6 @@ extern AVFilter ff_vf_program_opencl;
 extern AVFilter ff_vf_pseudocolor;
 extern AVFilter ff_vf_psnr;
 extern AVFilter ff_vf_pullup;
-extern AVFilter ff_vf_qp;
 extern AVFilter ff_vf_random;
 extern AVFilter ff_vf_readeia608;
 extern AVFilter ff_vf_readvitc;
diff --git a/libavfilter/vf_qp.c b/libavfilter/vf_qp.c
deleted file mode 100644
index 33d39493bc..00
--- a/libavfilter/vf_qp.c
+++ /dev/null
@@ -1,183 +0,0 @@
-/*
- * Copyright (C) 2004 Michael Niedermayer 
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#include 
-#include "libavutil/eval.h"
-#include "libavutil/imgutils.h"
-#include "libavutil/pixdesc.h"
-#include "libavutil/opt.h"
-#include "avfilter.h"
-#include "formats.h"
-#include "internal.h"
-#include "video.h"
-
-typedef struct QPContext {
-const AVClass *class;
-char *qp_expr_str;
-int8_t lut[257];
-int h, qstride;
-int evaluate_per_mb;
-} QPContext;
-
-#define OFFSET(x) offsetof(QPContext, x)
-#define FLAGS AV_OPT_FLAG_FILTERING_PARAM|AV_OPT_FLAG_VIDEO_PARAM
-
-static const AVOption qp_options[] = {
-{ "qp", "set qp expression", OFFSET(qp_expr_str), AV_OPT_TYPE_STRING, 
{.str=NULL}, 0, 0, FLAGS },
-{ NULL }
-};
-
-AVFILTER_DEFINE_CLASS(qp);
-
-static int config_input(AVFilterLink *inlink)
-{
-AVFilterContext *ctx = inlink->dst;
-QPContext *s = ctx->priv;
-int i;
-int ret;
-AVExpr *e = NULL;
-static const char *var_names[] = { "known", "qp", "x", "y", "w", "h", NULL 
};
-
-if (!s->qp_expr_str)
-return 0;
-
-ret = av_expr_parse(&e, s->qp_expr_str, var_names, NULL, NULL, NULL, NULL, 
0, ctx);
-if (ret < 0)
-return ret;
-
-s->h   = (inlink->h + 15) >> 4;
-s->qstride = (inlink->w + 15) >> 4;
-for (i = -129; i < 128; i++) {
-double var_values[] = { i != -129, i, NAN, NAN, s->qstride, s->h, 0};
-double temp_v

[FFmpeg-devel] [PATCH 08/12] vf_spp: drop the option to use frame-attached QP tables

2020-02-24 Thread Anton Khirnov
This API has been deprecated for five years.
---
 doc/filters.texi |   7 +--
 libavfilter/vf_spp.c | 100 +++
 libavfilter/vf_spp.h |   3 --
 3 files changed, 26 insertions(+), 84 deletions(-)

diff --git a/doc/filters.texi b/doc/filters.texi
index 3b1470ed0f..5fa1663426 100644
--- a/doc/filters.texi
+++ b/doc/filters.texi
@@ -17221,8 +17221,7 @@ that value the speed drops by a factor of approximately 
2.  Default value is
 @code{3}.
 
 @item qp
-Force a constant quantization parameter. If not set, the filter will use the QP
-from the video stream (if available).
+Force a constant quantization parameter.
 
 @item mode
 Set thresholding mode. Available modes are:
@@ -17234,10 +17233,6 @@ Set hard thresholding (default).
 Set soft thresholding (better de-ringing effect, but likely blurrier).
 @end table
 
-@item use_bframe_qp
-Enable the use of the QP from the B-Frames if set to @code{1}. Using this
-option may cause flicker since the B-Frames have often larger QP. Default is
-@code{0} (not enabled).
 @end table
 
 @subsection Commands
diff --git a/libavfilter/vf_spp.c b/libavfilter/vf_spp.c
index 7381938f7f..ba6138f08e 100644
--- a/libavfilter/vf_spp.c
+++ b/libavfilter/vf_spp.c
@@ -64,7 +64,6 @@ static const AVOption spp_options[] = {
 { "mode", "set thresholding mode", OFFSET(mode), AV_OPT_TYPE_INT, {.i64 = 
MODE_HARD}, 0, NB_MODES - 1, FLAGS, "mode" },
 { "hard", "hard thresholding", 0, AV_OPT_TYPE_CONST, {.i64 = 
MODE_HARD}, INT_MIN, INT_MAX, FLAGS, "mode" },
 { "soft", "soft thresholding", 0, AV_OPT_TYPE_CONST, {.i64 = 
MODE_SOFT}, INT_MIN, INT_MAX, FLAGS, "mode" },
-{ "use_bframe_qp", "use B-frames' QP", OFFSET(use_bframe_qp), 
AV_OPT_TYPE_BOOL, {.i64 = 0}, 0, 1, FLAGS },
 { NULL }
 };
 
@@ -232,7 +231,7 @@ static inline void add_block(uint16_t *dst, int linesize, 
const int16_t block[64
 
 static void filter(SPPContext *p, uint8_t *dst, uint8_t *src,
int dst_linesize, int src_linesize, int width, int height,
-   const uint8_t *qp_table, int qp_stride, int is_luma, int 
depth)
+   int is_luma, int depth)
 {
 int x, y, i;
 const int count = 1 << p->log2_count;
@@ -266,15 +265,8 @@ static void filter(SPPContext *p, uint8_t *dst, uint8_t 
*src,
 for (y = 0; y < height + 8; y += 8) {
 memset(p->temp + (8 + y) * linesize, 0, 8 * linesize * 
sizeof(*p->temp));
 for (x = 0; x < width + 8; x += 8) {
-int qp;
-
-if (p->qp) {
-qp = p->qp;
-} else{
-const int qps = 3 + is_luma;
-qp = qp_table[(FFMIN(x, width - 1) >> qps) + (FFMIN(y, height 
- 1) >> qps) * qp_stride];
-qp = FFMAX(1, ff_norm_qscale(qp, p->qscale_type));
-}
+int qp = p->qp;
+
 for (i = 0; i < count; i++) {
 const int x1 = x + offset[i + count - 1][0];
 const int y1 = y + offset[i + count - 1][1];
@@ -357,77 +349,36 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in)
 SPPContext *s = ctx->priv;
 AVFilterLink *outlink = ctx->outputs[0];
 AVFrame *out = in;
-int qp_stride = 0;
-const int8_t *qp_table = NULL;
 const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(inlink->format);
 const int depth = desc->comp[0].depth;
 
-/* if we are not in a constant user quantizer mode and we don't want to use
- * the quantizers from the B-frames (B-frames often have a higher QP), we
- * need to save the qp table from the last non B-frame; this is what the
- * following code block does */
-if (!s->qp) {
-qp_table = av_frame_get_qp_table(in, &qp_stride, &s->qscale_type);
-
-if (qp_table && !s->use_bframe_qp && in->pict_type != 
AV_PICTURE_TYPE_B) {
-int w, h;
-
-/* if the qp stride is not set, it means the QP are only defined on
- * a line basis */
-if (!qp_stride) {
-w = AV_CEIL_RSHIFT(inlink->w, 4);
-h = 1;
-} else {
-w = qp_stride;
-h = AV_CEIL_RSHIFT(inlink->h, 4);
-}
-
-if (w * h > s->non_b_qp_alloc_size) {
-int ret = av_reallocp_array(&s->non_b_qp_table, w, h);
-if (ret < 0) {
-s->non_b_qp_alloc_size = 0;
-return ret;
-}
-s->non_b_qp_alloc_size = w * h;
-}
-
-av_assert0(w * h <= s->non_b_qp_alloc_size);
-memcpy(s->non_b_qp_table, qp_table, w * h);
-}
-}
-
 if (s->log2_count && !ctx->is_disabled) {
-if (!s->use_bframe_qp && s->non_b_qp_table)
-qp_table = s->non_b_qp_table;
-
-if (qp_table || s->qp) {
-const int cw = AV_CEIL_RSHIFT(inlink->w, s->hsub);
-const int ch = AV_CEIL_RSHIFT(inlink->h, s->vsub);
-
-/* get a ne

[FFmpeg-devel] [PATCH 05/12] vf_fspp: drop the option to use frame-attached QP tables

2020-02-24 Thread Anton Khirnov
This API has been deprecated for five years.
---
 doc/filters.texi  |   6 --
 libavfilter/vf_fspp.c | 129 ++
 libavfilter/vf_fspp.h |   3 -
 3 files changed, 30 insertions(+), 108 deletions(-)

diff --git a/doc/filters.texi b/doc/filters.texi
index 43e52f930a..59571a7022 100644
--- a/doc/filters.texi
+++ b/doc/filters.texi
@@ -11382,18 +11382,12 @@ an integer in the range 4-5. Default value is 
@code{4}.
 
 @item qp
 Force a constant quantization parameter. It accepts an integer in range 0-63.
-If not set, the filter will use the QP from the video stream (if available).
 
 @item strength
 Set filter strength. It accepts an integer in range -15 to 32. Lower values 
mean
 more details but also more artifacts, while higher values make the image 
smoother
 but also blurrier. Default value is @code{0} − PSNR optimal.
 
-@item use_bframe_qp
-Enable the use of the QP from the B-Frames if set to @code{1}. Using this
-option may cause flicker since the B-Frames have often larger QP. Default is
-@code{0} (not enabled).
-
 @end table
 
 @section gblur
diff --git a/libavfilter/vf_fspp.c b/libavfilter/vf_fspp.c
index c6989046c4..b53ae337c9 100644
--- a/libavfilter/vf_fspp.c
+++ b/libavfilter/vf_fspp.c
@@ -48,7 +48,6 @@ static const AVOption fspp_options[] = {
 { "quality",   "set quality",  
OFFSET(log2_count),AV_OPT_TYPE_INT, {.i64 = 4},   4, MAX_LEVEL, FLAGS },
 { "qp","force a constant quantizer parameter", OFFSET(qp), 
   AV_OPT_TYPE_INT, {.i64 = 0},   0, 64,FLAGS },
 { "strength",  "set filter strength",  
OFFSET(strength),  AV_OPT_TYPE_INT, {.i64 = 0}, -15, 32,FLAGS },
-{ "use_bframe_qp", "use B-frames' QP", 
OFFSET(use_bframe_qp), AV_OPT_TYPE_BOOL,{.i64 = 0},   0, 1, FLAGS },
 { NULL }
 };
 
@@ -148,15 +147,12 @@ static void mul_thrmat_c(int16_t *thr_adr_noq, int16_t 
*thr_adr, int q)
 
 static void filter(FSPPContext *p, uint8_t *dst, uint8_t *src,
int dst_stride, int src_stride,
-   int width, int height,
-   uint8_t *qp_store, int qp_stride, int is_luma)
+   int width, int height, int is_luma)
 {
-int x, x0, y, es, qy, t;
+int x, x0, y, es;
 
 const int stride = is_luma ? p->temp_stride : (width + 16);
 const int step = 6 - p->log2_count;
-const int qpsh = 4 - p->hsub * !is_luma;
-const int qpsv = 4 - p->vsub * !is_luma;
 
 DECLARE_ALIGNED(32, int32_t, block_align)[4 * 8 * BLOCKSZ + 4 * 8 * 
BLOCKSZ];
 int16_t *block  = (int16_t *)block_align;
@@ -186,31 +182,14 @@ static void filter(FSPPContext *p, uint8_t *dst, uint8_t 
*src,
 
 for (y = step; y < height + 8; y += step) {//step= 1,2
 const int y1 = y - 8 + step; //l5-7  l4-6;
-qy = y - 4;
 
-if (qy > height - 1) qy = height - 1;
-if (qy < 0) qy = 0;
-
-qy = (qy >> qpsv) * qp_stride;
 p->row_fdct(block, p->src + y * stride + 2 - (y&1), stride, 2);
 
 for (x0 = 0; x0 < width + 8 - 8 * (BLOCKSZ - 1); x0 += 8 * (BLOCKSZ - 
1)) {
 p->row_fdct(block + 8 * 8, p->src + y * stride + 8 + x0 + 2 - 
(y&1), stride, 2 * (BLOCKSZ - 1));
 
-if (p->qp)
-p->column_fidct((int16_t *)(&p->threshold_mtx[0]), block + 0 * 
8, block3 + 0 * 8, 8 * (BLOCKSZ - 1)); //yes, this is a HOTSPOT
-else
-for (x = 0; x < 8 * (BLOCKSZ - 1); x += 8) {
-t = x + x0 - 2;//correct 
t=x+x0-2-(y&1), but its the same
-
-if (t < 0) t = 0;   //t always < width-2
-
-t = qp_store[qy + (t >> qpsh)];
-t = ff_norm_qscale(t, p->qscale_type);
+p->column_fidct((int16_t *)(&p->threshold_mtx[0]), block + 0 * 8, 
block3 + 0 * 8, 8 * (BLOCKSZ - 1)); //yes, this is a HOTSPOT
 
-if (t != p->prev_q) p->prev_q = t, p->mul_thrmat((int16_t 
*)(&p->threshold_mtx_noq[0]), (int16_t *)(&p->threshold_mtx[0]), t);
-p->column_fidct((int16_t *)(&p->threshold_mtx[0]), block + 
x * 8, block3 + x * 8, 8); //yes, this is a HOTSPOT
-}
 p->row_idct(block3 + 0 * 8, p->temp + (y & 15) * stride + x0 + 2 - 
(y & 1), stride, 2 * (BLOCKSZ - 1));
 memmove(block,  block  + (BLOCKSZ - 1) * 64, 8 * 8 * 
sizeof(int16_t)); //cycling
 memmove(block3, block3 + (BLOCKSZ - 1) * 64, 6 * 8 * 
sizeof(int16_t));
@@ -525,13 +504,6 @@ static int config_input(AVFilterLink *inlink)
 if (!fspp->temp || !fspp->src)
 return AVERROR(ENOMEM);
 
-if (!fspp->use_bframe_qp && !fspp->qp) {
-fspp->non_b_qp_alloc_size = AV_CEIL_RSHIFT(inlink->w, 4) * 
AV_CEIL_RSHIFT(inlink->h, 4);
-fspp->non_b_qp_table = av_calloc(fspp->non_b_qp_alloc_size, 
sizeof(*fspp->non_b_qp_table));
-if (!fspp->non_b_

[FFmpeg-devel] [PATCH 09/12] vf_uspp: drop the option to use frame-attached QP tables

2020-02-24 Thread Anton Khirnov
This API has been deprecated for five years.
---
 doc/filters.texi  |  3 +-
 libavfilter/vf_uspp.c | 94 +--
 2 files changed, 19 insertions(+), 78 deletions(-)

diff --git a/doc/filters.texi b/doc/filters.texi
index 5fa1663426..4d17330df5 100644
--- a/doc/filters.texi
+++ b/doc/filters.texi
@@ -18724,8 +18724,7 @@ that value the speed drops by a factor of approximately 
2.  Default value is
 @code{3}.
 
 @item qp
-Force a constant quantization parameter. If not set, the filter will use the QP
-from the video stream (if available).
+Force a constant quantization parameter.
 @end table
 
 @section v360
diff --git a/libavfilter/vf_uspp.c b/libavfilter/vf_uspp.c
index da4029f4b2..677eeff276 100644
--- a/libavfilter/vf_uspp.c
+++ b/libavfilter/vf_uspp.c
@@ -51,9 +51,6 @@ typedef struct USPPContext {
 AVCodecContext *avctx_enc[BLOCK*BLOCK];
 AVFrame *frame;
 AVFrame *frame_dec;
-uint8_t *non_b_qp_table;
-int non_b_qp_alloc_size;
-int use_bframe_qp;
 } USPPContext;
 
 #define OFFSET(x) offsetof(USPPContext, x)
@@ -61,7 +58,6 @@ typedef struct USPPContext {
 static const AVOption uspp_options[] = {
 { "quality",   "set quality",  
OFFSET(log2_count),AV_OPT_TYPE_INT, {.i64 = 3}, 0, MAX_LEVEL, FLAGS },
 { "qp","force a constant quantizer parameter", OFFSET(qp), 
   AV_OPT_TYPE_INT, {.i64 = 0}, 0, 63,FLAGS },
-{ "use_bframe_qp", "use B-frames' QP", 
OFFSET(use_bframe_qp), AV_OPT_TYPE_BOOL,{.i64 = 0}, 0, 1, FLAGS },
 { NULL }
 };
 
@@ -182,7 +178,7 @@ static void store_slice_c(uint8_t *dst, const uint16_t *src,
 
 static void filter(USPPContext *p, uint8_t *dst[3], uint8_t *src[3],
int dst_stride[3], int src_stride[3], int width,
-   int height, uint8_t *qp_store, int qp_stride)
+   int height)
 {
 int x, y, i, j;
 const int count = 1temp[i], 0, (h + 2 * block) * stride * sizeof(int16_t));
 }
 
-if (p->qp)
-p->frame->quality = p->qp * FF_QP2LAMBDA;
-else {
-int qpsum=0;
-int qpcount = (height>>4) * (height>>4);
+p->frame->quality = p->qp * FF_QP2LAMBDA;
 
-for (y = 0; y < (height>>4); y++) {
-for (x = 0; x < (width>>4); x++)
-qpsum += qp_store[x + y * qp_stride];
-}
-p->frame->quality = ff_norm_qscale((qpsum + qpcount/2) / qpcount, 
p->qscale_type) * FF_QP2LAMBDA;
-}
 //init per MB qscale stuff FIXME
 p->frame->height = height + BLOCK;
 p->frame->width  = width + BLOCK;
@@ -384,68 +370,25 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in)
 AVFilterLink *outlink = ctx->outputs[0];
 AVFrame *out = in;
 
-int qp_stride = 0;
-uint8_t *qp_table = NULL;
-
-/* if we are not in a constant user quantizer mode and we don't want to use
- * the quantizers from the B-frames (B-frames often have a higher QP), we
- * need to save the qp table from the last non B-frame; this is what the
- * following code block does */
-if (!uspp->qp) {
-qp_table = av_frame_get_qp_table(in, &qp_stride, &uspp->qscale_type);
-
-if (qp_table && !uspp->use_bframe_qp && in->pict_type != 
AV_PICTURE_TYPE_B) {
-int w, h;
-
-/* if the qp stride is not set, it means the QP are only defined on
- * a line basis */
-if (!qp_stride) {
-w = AV_CEIL_RSHIFT(inlink->w, 4);
-h = 1;
-} else {
-w = qp_stride;
-h = AV_CEIL_RSHIFT(inlink->h, 4);
-}
-
-if (w * h > uspp->non_b_qp_alloc_size) {
-int ret = av_reallocp_array(&uspp->non_b_qp_table, w, h);
-if (ret < 0) {
-uspp->non_b_qp_alloc_size = 0;
-return ret;
-}
-uspp->non_b_qp_alloc_size = w * h;
-}
-
-av_assert0(w * h <= uspp->non_b_qp_alloc_size);
-memcpy(uspp->non_b_qp_table, qp_table, w * h);
-}
-}
-
 if (uspp->log2_count && !ctx->is_disabled) {
-if (!uspp->use_bframe_qp && uspp->non_b_qp_table)
-qp_table = uspp->non_b_qp_table;
-
-if (qp_table || uspp->qp) {
-
-/* get a new frame if in-place is not possible or if the dimensions
- * are not multiple of 8 */
-if (!av_frame_is_writable(in) || (inlink->w & 7) || (inlink->h & 
7)) {
-const int aligned_w = FFALIGN(inlink->w, 8);
-const int aligned_h = FFALIGN(inlink->h, 8);
-
-out = ff_get_video_buffer(outlink, aligned_w, aligned_h);
-if (!out) {
-av_frame_free(&in);
-return AVERROR(ENOMEM);
-

[FFmpeg-devel] [PATCH 06/12] vf_pp: drop the option to use frame-attached QP tables

2020-02-24 Thread Anton Khirnov
This API has been deprecated for five years.
---
 libavfilter/vf_pp.c |  8 ++--
 tests/fate/filter-video.mak |  3 +--
 tests/ref/fate/filter-pp| 10 --
 3 files changed, 3 insertions(+), 18 deletions(-)
 delete mode 100644 tests/ref/fate/filter-pp

diff --git a/libavfilter/vf_pp.c b/libavfilter/vf_pp.c
index 524ef1bb0a..efc558db8a 100644
--- a/libavfilter/vf_pp.c
+++ b/libavfilter/vf_pp.c
@@ -126,8 +126,6 @@ static int pp_filter_frame(AVFilterLink *inlink, AVFrame 
*inbuf)
 const int aligned_w = FFALIGN(outlink->w, 8);
 const int aligned_h = FFALIGN(outlink->h, 8);
 AVFrame *outbuf;
-int qstride, qp_type;
-int8_t *qp_table ;
 
 outbuf = ff_get_video_buffer(outlink, aligned_w, aligned_h);
 if (!outbuf) {
@@ -137,16 +135,14 @@ static int pp_filter_frame(AVFilterLink *inlink, AVFrame 
*inbuf)
 av_frame_copy_props(outbuf, inbuf);
 outbuf->width  = inbuf->width;
 outbuf->height = inbuf->height;
-qp_table = av_frame_get_qp_table(inbuf, &qstride, &qp_type);
 
 pp_postprocess((const uint8_t **)inbuf->data, inbuf->linesize,
outbuf->data, outbuf->linesize,
aligned_w, outlink->h,
-   qp_table,
-   qstride,
+   NULL, 0,
pp->modes[pp->mode_id],
pp->pp_ctx,
-   outbuf->pict_type | (qp_type ? PP_PICT_TYPE_QP2 : 0));
+   outbuf->pict_type);
 
 av_frame_free(&inbuf);
 return ff_filter_frame(outlink, outbuf);
diff --git a/tests/fate/filter-video.mak b/tests/fate/filter-video.mak
index 5f4fd75b40..c7b60ed94a 100644
--- a/tests/fate/filter-video.mak
+++ b/tests/fate/filter-video.mak
@@ -531,11 +531,10 @@ fate-filter-idet: CMD = framecrc -flags bitexact -idct 
simple -i $(SRC) -vf idet
 FATE_FILTER_VSYNTH-$(CONFIG_PAD_FILTER) += fate-filter-pad
 fate-filter-pad: CMD = video_filter "pad=iw*1.5:ih*1.5:iw*0.3:ih*0.2"
 
-FATE_FILTER_PP = fate-filter-pp fate-filter-pp1 fate-filter-pp4 
fate-filter-pp5 fate-filter-pp6
+FATE_FILTER_PP = fate-filter-pp1 fate-filter-pp4 fate-filter-pp5 
fate-filter-pp6
 FATE_FILTER_VSYNTH-$(CONFIG_PP_FILTER) += $(FATE_FILTER_PP)
 $(FATE_FILTER_PP): fate-vsynth1-mpeg4-qprd
 
-fate-filter-pp:  CMD = framecrc -flags bitexact -idct simple -i 
$(TARGET_PATH)/tests/data/fate/vsynth1-mpeg4-qprd.avi -frames:v 5 -flags 
+bitexact -vf "pp=be/hb/vb/tn/l5/al"
 fate-filter-pp1: CMD = video_filter "pp=fq|4/be/hb/vb/tn/l5/al"
 fate-filter-pp4: CMD = video_filter "pp=be/ci"
 fate-filter-pp5: CMD = video_filter "pp=md"
diff --git a/tests/ref/fate/filter-pp b/tests/ref/fate/filter-pp
deleted file mode 100644
index 5c0e2994c6..00
--- a/tests/ref/fate/filter-pp
+++ /dev/null
@@ -1,10 +0,0 @@
-#tb 0: 1/25
-#media_type 0: video
-#codec_id 0: rawvideo
-#dimensions 0: 352x288
-#sar 0: 1/1
-0,  1,  1,1,   152064, 0x0af8a873
-0,  2,  2,1,   152064, 0xaeb99897
-0,  3,  3,1,   152064, 0x8f3712c8
-0,  4,  4,1,   152064, 0x5bf6a64c
-0,  5,  5,1,   152064, 0x262de352
-- 
2.24.1

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

[FFmpeg-devel] [PATCH 07/12] vf_pp7: drop the option to use frame-attached QP tables

2020-02-24 Thread Anton Khirnov
This API has been deprecated for five years.
---
 doc/filters.texi |  3 +--
 libavfilter/vf_pp7.c | 34 ++
 2 files changed, 11 insertions(+), 26 deletions(-)

diff --git a/doc/filters.texi b/doc/filters.texi
index 59571a7022..3b1470ed0f 100644
--- a/doc/filters.texi
+++ b/doc/filters.texi
@@ -15047,8 +15047,7 @@ The filter accepts the following options:
 @table @option
 @item qp
 Force a constant quantization parameter. It accepts an integer in range
-0 to 63. If not set, the filter will use the QP from the video stream
-(if available).
+0 to 63.
 
 @item mode
 Set thresholding mode. Available modes are:
diff --git a/libavfilter/vf_pp7.c b/libavfilter/vf_pp7.c
index 570a1c90b9..97f5b459cd 100644
--- a/libavfilter/vf_pp7.c
+++ b/libavfilter/vf_pp7.c
@@ -200,7 +200,7 @@ static int softthresh_c(PP7Context *p, int16_t *src, int qp)
 static void filter(PP7Context *p, uint8_t *dst, uint8_t *src,
int dst_stride, int src_stride,
int width, int height,
-   uint8_t *qp_store, int qp_stride, int is_luma)
+   int is_luma)
 {
 int x, y;
 const int stride = is_luma ? p->temp_stride : ((width + 16 + 15) & (~15));
@@ -232,16 +232,11 @@ static void filter(PP7Context *p, uint8_t *dst, uint8_t 
*src,
 dctA_c(tp + 4 * 8, src, stride);
 }
 for (x = 0; x < width; ) {
-const int qps = 3 + is_luma;
 int qp;
 int end = FFMIN(x + 8, width);
 
-if (p->qp)
-qp = p->qp;
-else {
-qp = qp_store[ (FFMIN(x, width - 1) >> qps) + (FFMIN(y, height 
- 1) >> qps) * qp_stride];
-qp = ff_norm_qscale(qp, p->qscale_type);
-}
+qp = p->qp;
+
 for (; x < end; x++) {
 const int index = x + y * stride + (8 - 3) * (1 + stride) + 8; 
//FIXME silly offset
 uint8_t *src = p_src + index;
@@ -321,12 +316,6 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in)
 AVFilterLink *outlink = ctx->outputs[0];
 AVFrame *out = in;
 
-int qp_stride = 0;
-uint8_t *qp_table = NULL;
-
-if (!pp7->qp)
-qp_table = av_frame_get_qp_table(in, &qp_stride, &pp7->qscale_type);
-
 if (!ctx->is_disabled) {
 const int cw = AV_CEIL_RSHIFT(inlink->w, pp7->hsub);
 const int ch = AV_CEIL_RSHIFT(inlink->h, pp7->vsub);
@@ -347,16 +336,13 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in)
 out->height = in->height;
 }
 
-if (qp_table || pp7->qp) {
-
-filter(pp7, out->data[0], in->data[0], out->linesize[0], 
in->linesize[0],
-   inlink->w, inlink->h, qp_table, qp_stride, 1);
-filter(pp7, out->data[1], in->data[1], out->linesize[1], 
in->linesize[1],
-   cw,ch,qp_table, qp_stride, 0);
-filter(pp7, out->data[2], in->data[2], out->linesize[2], 
in->linesize[2],
-   cw,ch,qp_table, qp_stride, 0);
-emms_c();
-}
+filter(pp7, out->data[0], in->data[0], out->linesize[0], 
in->linesize[0],
+   inlink->w, inlink->h, 1);
+filter(pp7, out->data[1], in->data[1], out->linesize[1], 
in->linesize[1],
+   cw,ch,0);
+filter(pp7, out->data[2], in->data[2], out->linesize[2], 
in->linesize[2],
+   cw,ch,0);
+emms_c();
 }
 
 if (in != out) {
-- 
2.24.1

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

[FFmpeg-devel] Major bump preparations - lavu

2020-02-24 Thread Anton Khirnov
Hi,
we have discussed previously that we want to do a major bump soon. This
set starts with some preparations for that. After it, lavu should be
ready for the bump.

Please comment
-- 
Anton Khirnov

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

[FFmpeg-devel] [PATCH 02/12] fifo: hide the definition of AVFifoBuffer in next+1 major bump

2020-02-24 Thread Anton Khirnov
There is no reason whatsoever for it to be public.
---
 libavutil/fifo.c   | 9 +
 libavutil/fifo.h   | 8 ++--
 libavutil/tests/fifo.c | 2 +-
 libavutil/version.h| 3 +++
 4 files changed, 19 insertions(+), 3 deletions(-)

diff --git a/libavutil/fifo.c b/libavutil/fifo.c
index 0baaadc521..f58530a26e 100644
--- a/libavutil/fifo.c
+++ b/libavutil/fifo.c
@@ -25,6 +25,15 @@
 #include "fifo.h"
 #include "version.h"
 
+#if !FF_API_FIFO
+struct AVFifoBuffer
+{
+uint8_t *buffer;
+uint8_t *rptr, *wptr, *end;
+uint32_t rndx, wndx;
+};
+#endif
+
 static AVFifoBuffer *fifo_alloc_common(void *buffer, size_t size)
 {
 AVFifoBuffer *f;
diff --git a/libavutil/fifo.h b/libavutil/fifo.h
index 8cd964ef45..6c0e806c80 100644
--- a/libavutil/fifo.h
+++ b/libavutil/fifo.h
@@ -29,11 +29,15 @@
 #include "attributes.h"
 #include "version.h"
 
-typedef struct AVFifoBuffer {
+typedef struct AVFifoBuffer
+#if FF_API_FIFO
+{
 uint8_t *buffer;
 uint8_t *rptr, *wptr, *end;
 uint32_t rndx, wndx;
-} AVFifoBuffer;
+} AVFifoBuffer
+#endif
+;
 
 /**
  * Initialize an AVFifoBuffer.
diff --git a/libavutil/tests/fifo.c b/libavutil/tests/fifo.c
index 8a550e088b..5982d63bba 100644
--- a/libavutil/tests/fifo.c
+++ b/libavutil/tests/fifo.c
@@ -18,7 +18,7 @@
 
 #include 
 #include 
-#include "libavutil/fifo.h"
+#include "libavutil/fifo.c"
 
 int main(void)
 {
diff --git a/libavutil/version.h b/libavutil/version.h
index 90cc55b9ac..c271e85d29 100644
--- a/libavutil/version.h
+++ b/libavutil/version.h
@@ -129,6 +129,9 @@
 #ifndef FF_API_PSEUDOPAL
 #define FF_API_PSEUDOPAL(LIBAVUTIL_VERSION_MAJOR < 57)
 #endif
+#ifndef FF_API_FIFO
+#define FF_API_FIFO (LIBAVUTIL_VERSION_MAJOR < 58)
+#endif
 
 
 /**
-- 
2.24.1

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

[FFmpeg-devel] [PATCH 04/12] vf_codecview: drop qp functionality

2020-02-24 Thread Anton Khirnov
It depends on an API that has been deprecated for five years and is of
highly dubious usefulness.
---
 doc/filters.texi   |  3 ---
 libavfilter/vf_codecview.c | 26 --
 2 files changed, 29 deletions(-)

diff --git a/doc/filters.texi b/doc/filters.texi
index 2a1235183f..43e52f930a 100644
--- a/doc/filters.texi
+++ b/doc/filters.texi
@@ -7182,9 +7182,6 @@ forward predicted MVs of B-frames
 backward predicted MVs of B-frames
 @end table
 
-@item qp
-Display quantization parameters using the chroma planes.
-
 @item mv_type, mvt
 Set motion vectors type to visualize. Includes MVs from all frames unless 
specified by @var{frame_type} option.
 
diff --git a/libavfilter/vf_codecview.c b/libavfilter/vf_codecview.c
index 331bfba777..2657660b97 100644
--- a/libavfilter/vf_codecview.c
+++ b/libavfilter/vf_codecview.c
@@ -50,7 +50,6 @@ typedef struct CodecViewContext {
 unsigned frame_type;
 unsigned mv_type;
 int hsub, vsub;
-int qp;
 } CodecViewContext;
 
 #define OFFSET(x) offsetof(CodecViewContext, x)
@@ -62,7 +61,6 @@ static const AVOption codecview_options[] = {
 CONST("pf", "forward predicted MVs of P-frames",  MV_P_FOR,  "mv"),
 CONST("bf", "forward predicted MVs of B-frames",  MV_B_FOR,  "mv"),
 CONST("bb", "backward predicted MVs of B-frames", MV_B_BACK, "mv"),
-{ "qp", NULL, OFFSET(qp), AV_OPT_TYPE_BOOL, {.i64=0}, 0, 1, .flags = FLAGS 
},
 { "mv_type", "set motion vectors type", OFFSET(mv_type), 
AV_OPT_TYPE_FLAGS, {.i64=0}, 0, INT_MAX, FLAGS, "mv_type" },
 { "mvt", "set motion vectors type", OFFSET(mv_type), 
AV_OPT_TYPE_FLAGS, {.i64=0}, 0, INT_MAX, FLAGS, "mv_type" },
 CONST("fp", "forward predicted MVs",  MV_TYPE_FOR,  "mv_type"),
@@ -218,30 +216,6 @@ static int filter_frame(AVFilterLink *inlink, AVFrame 
*frame)
 CodecViewContext *s = ctx->priv;
 AVFilterLink *outlink = ctx->outputs[0];
 
-if (s->qp) {
-int qstride, qp_type;
-int8_t *qp_table = av_frame_get_qp_table(frame, &qstride, &qp_type);
-
-if (qp_table) {
-int x, y;
-const int w = AV_CEIL_RSHIFT(frame->width,  s->hsub);
-const int h = AV_CEIL_RSHIFT(frame->height, s->vsub);
-uint8_t *pu = frame->data[1];
-uint8_t *pv = frame->data[2];
-const int lzu = frame->linesize[1];
-const int lzv = frame->linesize[2];
-
-for (y = 0; y < h; y++) {
-for (x = 0; x < w; x++) {
-const int qp = ff_norm_qscale(qp_table[(y >> 3) * qstride 
+ (x >> 3)], qp_type) * 128/31;
-pu[x] = pv[x] = qp;
-}
-pu += lzu;
-pv += lzv;
-}
-}
-}
-
 if (s->mv || s->mv_type) {
 AVFrameSideData *sd = av_frame_get_side_data(frame, 
AV_FRAME_DATA_MOTION_VECTORS);
 if (sd) {
-- 
2.24.1
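
Since the filter now only visualizes motion vectors, a minimal sketch of how
the AV_FRAME_DATA_MOTION_VECTORS side data is consumed may help readers; this
is illustrative code, not part of the patch:

#include "libavutil/frame.h"
#include "libavutil/log.h"
#include "libavutil/motion_vector.h"

static void dump_motion_vectors(void *log_ctx, const AVFrame *frame)
{
    const AVFrameSideData *sd =
        av_frame_get_side_data(frame, AV_FRAME_DATA_MOTION_VECTORS);
    const AVMotionVector *mvs;
    size_t i, nb_mvs;

    if (!sd)
        return;
    mvs    = (const AVMotionVector *)sd->data;
    nb_mvs = sd->size / sizeof(*mvs);

    for (i = 0; i < nb_mvs; i++) {
        const AVMotionVector *mv = &mvs[i];
        /* source < 0: predicted from the past, source > 0: from the future */
        av_log(log_ctx, AV_LOG_DEBUG, "%2d %2dx%-2d (%4d,%4d) -> (%4d,%4d)\n",
               mv->source, mv->w, mv->h,
               mv->src_x, mv->src_y, mv->dst_x, mv->dst_y);
    }
}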

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH] avcodec/magicyuv: Check that there are enough lines for interlacing to be possible

2020-02-24 Thread Paul B Mahol
lgtm

On 2/22/20, Michael Niedermayer  wrote:
> Fixes: out of array access
> Fixes:
> 20763/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_MAGICYUV_fuzzer-5759562508664832
>
> Found-by: continuous fuzzing process
> https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
> Signed-off-by: Michael Niedermayer 
> ---
>  libavcodec/magicyuv.c | 11 +++
>  1 file changed, 11 insertions(+)
>
> diff --git a/libavcodec/magicyuv.c b/libavcodec/magicyuv.c
> index 21a32785bc..aacd0d4d7d 100644
> --- a/libavcodec/magicyuv.c
> +++ b/libavcodec/magicyuv.c
> @@ -659,6 +659,17 @@ static int magy_decode_frame(AVCodecContext *avctx,
> void *data,
>  return AVERROR_INVALIDDATA;
>  }
>
> +if (s->interlaced) {
> +if ((s->slice_height >> s->vshift[1]) < 2) {
> +av_log(avctx, AV_LOG_ERROR, "impossible slice height\n");
> +return AVERROR_INVALIDDATA;
> +}
> +if ((avctx->coded_height % s->slice_height) &&
> ((avctx->coded_height % s->slice_height) >> s->vshift[1]) < 2) {
> +av_log(avctx, AV_LOG_ERROR, "impossible height\n");
> +return AVERROR_INVALIDDATA;
> +}
> +}
> +
>  for (i = 0; i < s->planes; i++) {
>  av_fast_malloc(&s->slices[i], &s->slices_size[i], s->nb_slices *
> sizeof(Slice));
>  if (!s->slices[i])
> --
> 2.17.1
>
> ___
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
>
> To unsubscribe, visit link above, or email
> ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
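
For readers skimming the new check: with 4:2:0 content the chroma planes are
vertically subsampled, so the guard works out as in this standalone sketch
(values are made up and not taken from the fuzz case; the "two rows" floor is
presumably one row per interlaced field):

/* vshift_chroma == 1 for yuv420p: slice_height == 3 gives 3 >> 1 == 1 and
 * is rejected, slice_height == 4 gives 2 and passes */
static int slice_height_ok(int slice_height, int vshift_chroma)
{
    return (slice_height >> vshift_chroma) >= 2;
}
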
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH] Add .mailmap

2020-02-24 Thread Thilo Borgmann
On 23.02.20 at 21:40, Josh de Kock wrote:
> On Sun, Feb 23, 2020, at 4:07 PM, Thilo Borgmann wrote:
>> [...]
>>
>> How is it automatically generated?
> 
> I wrote a small script to parse author names/emails and group
> emails together based on names. In the future, additions should
> be added manually.

Having that script in tools/ shouldn't hurt, since manual updates can get out
of sync. Also, authors might be unaware of .mailmap or forget about it.

If you script the selection of people for a general assembly, as specified
during an earlier meeting, that script should be committed, too.

.mailmap patch itself LGTM.

Thanks,
Thilo
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH 8/9] lavc/hevcdec: add 4:2:2 8-bit/10-bit VAAPI decode support

2020-02-24 Thread Carl Eugen Hoyos
On Mon, Feb 24, 2020 at 02:09, Mark Thompson  wrote:
>
> On 24/02/2020 00:37, Carl Eugen Hoyos wrote:
> > On Mon, Feb 24, 2020 at 01:25, Mark Thompson  wrote:
> >
> >> We seem to have agreement that the Y210 / wider YUYV is fine
> >
> > Why do you think so?
> > I was under the impression that we have agreement that this
> > has to be discussed further.
>
> All of the dispute was about the bit-packed formats

That's not how I remember it, and I would like to remind you that,
while uncontroversial patches in general need no approval at all,
controversial patches can only be approved on the mailing list, not on IRC.

(Funny that the link you provided completely contradicts your argument)

Note that I don't really care; I simply remember people arguing in the past
that we have too many pix_fmts, and that we recently agreed we want to
discuss with Intel why we have to treat them differently from the other
hardware manufacturers.
This agreement was made after we were asked to meet because it was
claimed we need new mechanisms to discuss technical questions.

Carl Eugen
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

[FFmpeg-devel] [PATCH] avfilter/vf_program_opencl: allow setting kernel per plane

2020-02-24 Thread Paul B Mahol
Fixes #7190

Signed-off-by: Paul B Mahol 
---
 doc/filters.texi| 22 
 libavfilter/vf_program_opencl.c | 64 ++---
 2 files changed, 65 insertions(+), 21 deletions(-)

diff --git a/doc/filters.texi b/doc/filters.texi
index 70fd7a4cc7..6b10f649b9 100644
--- a/doc/filters.texi
+++ b/doc/filters.texi
@@ -21302,6 +21302,17 @@ Number of inputs to the filter.  Defaults to 1.
 @item size, s
 Size of output frames.  Defaults to the same as the first input.
 
+@item kernel2
+Kernel name in program for the 2nd plane; if not set, the kernel from option
+@var{kernel} is used.
+
+@item kernel3
+Kernel name in program for the 3rd plane; if not set, the kernel from option
+@var{kernel} is used.
+
+@item kernel4
+Kernel name in program for the 4th plane; if not set, the kernel from option
+@var{kernel} is used.
 @end table
 
 The program source file must contain a kernel function with the given name,
@@ -22488,6 +22499,17 @@ Pixel format to use for the generated frames.  This 
must be set.
 @item rate, r
 Number of frames generated every second.  Default value is '25'.
 
+@item kernel2
+Kernel name in program for the 2nd plane; if not set, the kernel from option
+@var{kernel} is used.
+
+@item kernel3
+Kernel name in program for the 3rd plane; if not set, the kernel from option
+@var{kernel} is used.
+
+@item kernel4
+Kernel name in program for the 4th plane; if not set, the kernel from option
+@var{kernel} is used.
 @end table
 
 For details of how the program loading works, see the @ref{program_opencl}
diff --git a/libavfilter/vf_program_opencl.c b/libavfilter/vf_program_opencl.c
index ec25e931f5..f748b15037 100644
--- a/libavfilter/vf_program_opencl.c
+++ b/libavfilter/vf_program_opencl.c
@@ -33,14 +33,14 @@ typedef struct ProgramOpenCLContext {
 
 int loaded;
 cl_uint index;
-cl_kernel   kernel;
+cl_kernel   kernel[4];
 cl_command_queuecommand_queue;
 
 FFFrameSync fs;
 AVFrame   **frames;
 
 const char *source_file;
-const char *kernel_name;
+const char *kernel_name[4];
 int nb_inputs;
 int width, height;
 enum AVPixelFormat  source_format;
@@ -66,15 +66,17 @@ static int program_opencl_load(AVFilterContext *avctx)
 return AVERROR(EIO);
 }
 
-ctx->kernel = clCreateKernel(ctx->ocf.program, ctx->kernel_name, &cle);
-if (!ctx->kernel) {
-if (cle == CL_INVALID_KERNEL_NAME) {
-av_log(avctx, AV_LOG_ERROR, "Kernel function '%s' not found in "
-   "program.\n", ctx->kernel_name);
-} else {
-av_log(avctx, AV_LOG_ERROR, "Failed to create kernel: %d.\n", cle);
+for (int i = 0; i < 4; i++) {
+ctx->kernel[i] = clCreateKernel(ctx->ocf.program, ctx->kernel_name[i] 
? ctx->kernel_name[i] : ctx->kernel_name[0], &cle);
+if (!ctx->kernel[i]) {
+if (cle == CL_INVALID_KERNEL_NAME) {
+av_log(avctx, AV_LOG_ERROR, "Kernel function '%s' not found in 
"
+   "program.\n", ctx->kernel_name[i] ? ctx->kernel_name[i] 
: ctx->kernel_name[0]);
+} else {
+av_log(avctx, AV_LOG_ERROR, "Failed to create kernel%d: 
%d.\n", i, cle);
+}
+return AVERROR(EIO);
 }
-return AVERROR(EIO);
 }
 
 ctx->loaded = 1;
@@ -108,14 +110,14 @@ static int program_opencl_run(AVFilterContext *avctx)
 if (!dst)
 break;
 
-cle = clSetKernelArg(ctx->kernel, 0, sizeof(cl_mem), &dst);
+cle = clSetKernelArg(ctx->kernel[plane], 0, sizeof(cl_mem), &dst);
 if (cle != CL_SUCCESS) {
 av_log(avctx, AV_LOG_ERROR, "Failed to set kernel "
"destination image argument: %d.\n", cle);
 err = AVERROR_UNKNOWN;
 goto fail;
 }
-cle = clSetKernelArg(ctx->kernel, 1, sizeof(cl_uint), &ctx->index);
+cle = clSetKernelArg(ctx->kernel[plane], 1, sizeof(cl_uint), 
&ctx->index);
 if (cle != CL_SUCCESS) {
 av_log(avctx, AV_LOG_ERROR, "Failed to set kernel "
"index argument: %d.\n", cle);
@@ -129,7 +131,7 @@ static int program_opencl_run(AVFilterContext *avctx)
 src = (cl_mem)ctx->frames[input]->data[plane];
 av_assert0(src);
 
-cle = clSetKernelArg(ctx->kernel, 2 + input, sizeof(cl_mem), &src);
+cle = clSetKernelArg(ctx->kernel[plane], 2 + input, 
sizeof(cl_mem), &src);
 if (cle != CL_SUCCESS) {
 av_log(avctx, AV_LOG_ERROR, "Failed to set kernel "
"source image argument %d: %d.\n", input, cle);
@@ -147,7 +149,7 @@ static int program_opencl_run(AVFilterContext *avctx)
"(%"SIZE_SPECIFIER"x%"SIZE_SPECIFIER").\n",
plane, global_work[0], global_work[1]);
 
-cle = clEnqueueNDRangeKernel(ctx->command_queue, ctx->kernel, 2, NULL,
+   
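
To illustrate why per-plane kernels help: a program can, for example, process
the luma plane while passing the chroma planes through untouched. Below is a
hypothetical program.cl along those lines; the kernel names and the gain
factor are made up, while the argument layout (destination image, frame
index, source images) follows the existing program_opencl convention:

__kernel void boost_luma(__write_only image2d_t dst,
                         unsigned int index,
                         __read_only  image2d_t src)
{
    const sampler_t sampler = CLK_NORMALIZED_COORDS_FALSE | CLK_FILTER_NEAREST;
    int2 loc = (int2)(get_global_id(0), get_global_id(1));
    float4 val = read_imagef(src, sampler, loc);
    write_imagef(dst, loc, clamp(val * 1.2f, 0.0f, 1.0f));
}

__kernel void copy_plane(__write_only image2d_t dst,
                         unsigned int index,
                         __read_only  image2d_t src)
{
    const sampler_t sampler = CLK_NORMALIZED_COORDS_FALSE | CLK_FILTER_NEAREST;
    int2 loc = (int2)(get_global_id(0), get_global_id(1));
    write_imagef(dst, loc, read_imagef(src, sampler, loc));
}

With this patch, such a program would presumably be selected on a three-plane
planar input with something like
kernel=boost_luma:kernel2=copy_plane:kernel3=copy_plane.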

Re: [FFmpeg-devel] [PATCH 3/3 v2] avformat/dashenc: always attempt to enable prft on ldash mode

2020-02-24 Thread Anton Khirnov
Quoting James Almer (2020-02-20 17:26:00)
> Signed-off-by: James Almer 

The commit message is now misleading, since it will only enable prft if it
has not been explicitly disabled.
> ---
> Now it can be overriden if you explicitly set write_prft to 0.
> 
>  libavformat/dashenc.c | 8 +++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/libavformat/dashenc.c b/libavformat/dashenc.c
> index a52cbc9113..7032adc84d 100644
> --- a/libavformat/dashenc.c
> +++ b/libavformat/dashenc.c
> @@ -1394,6 +1394,12 @@ static int dash_init(AVFormatContext *s)
>  c->frag_type = FRAG_TYPE_EVERY_FRAME;
>  }
>  
> +if (c->write_prft < 0) {
> +c->write_prft = c->ldash;

nit: !!, in case ldash becomes something else than a bool in the future

> +if (c->ldash)
> +av_log(s, AV_LOG_INFO, "Enabling Producer Reference Time element 
> for Low Latency mode\n");

I'd say this should be VERBOSE, since a normal run with no unexpected
events should produce no log output.

Otherwise LGTM.

-- 
Anton Khirnov
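
Folding both comments in, the initialization would presumably end up roughly
as below (an untested sketch of what a v3 could look like, not the committed
change):

if (c->write_prft < 0) {
    c->write_prft = !!c->ldash;
    if (c->ldash)
        av_log(s, AV_LOG_VERBOSE,
               "Enabling Producer Reference Time element for Low Latency mode\n");
}
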
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

[FFmpeg-devel] [PATCH 2/4] avfilter/vf_dnn_processing.c: use swscale for uint8<->float32 convert

2020-02-24 Thread Guo, Yejun
Signed-off-by: Guo, Yejun 
---
 libavfilter/vf_dnn_processing.c | 81 +++--
 1 file changed, 61 insertions(+), 20 deletions(-)

diff --git a/libavfilter/vf_dnn_processing.c b/libavfilter/vf_dnn_processing.c
index 492df93..4d0ee78 100644
--- a/libavfilter/vf_dnn_processing.c
+++ b/libavfilter/vf_dnn_processing.c
@@ -32,6 +32,7 @@
 #include "dnn_interface.h"
 #include "formats.h"
 #include "internal.h"
+#include "libswscale/swscale.h"
 
 typedef struct DnnProcessingContext {
 const AVClass *class;
@@ -47,6 +48,9 @@ typedef struct DnnProcessingContext {
 // input & output of the model at execution time
 DNNData input;
 DNNData output;
+
+struct SwsContext *sws_gray8_to_grayf32;
+struct SwsContext *sws_grayf32_to_gray8;
 } DnnProcessingContext;
 
 #define OFFSET(x) offsetof(DnnProcessingContext, x)
@@ -211,6 +215,45 @@ static int config_input(AVFilterLink *inlink)
 return 0;
 }
 
+static int prepare_sws_context(AVFilterLink *outlink)
+{
+AVFilterContext *context = outlink->src;
+DnnProcessingContext *ctx = context->priv;
+AVFilterLink *inlink = context->inputs[0];
+enum AVPixelFormat fmt = inlink->format;
+DNNDataType input_dt  = ctx->input.dt;
+DNNDataType output_dt = ctx->output.dt;
+
+switch (fmt) {
+case AV_PIX_FMT_RGB24:
+case AV_PIX_FMT_BGR24:
+if (input_dt == DNN_FLOAT) {
+ctx->sws_gray8_to_grayf32 = sws_getContext(inlink->w * 3,
+   inlink->h,
+   AV_PIX_FMT_GRAY8,
+   inlink->w * 3,
+   inlink->h,
+   AV_PIX_FMT_GRAYF32,
+   0, NULL, NULL, NULL);
+}
+if (output_dt == DNN_FLOAT) {
+ctx->sws_grayf32_to_gray8 = sws_getContext(outlink->w * 3,
+   outlink->h,
+   AV_PIX_FMT_GRAYF32,
+   outlink->w * 3,
+   outlink->h,
+   AV_PIX_FMT_GRAY8,
+   0, NULL, NULL, NULL);
+}
+return 0;
+default:
+//do nothing
+break;
+}
+
+return 0;
+}
+
 static int config_output(AVFilterLink *outlink)
 {
 AVFilterContext *context = outlink->src;
@@ -227,25 +270,23 @@ static int config_output(AVFilterLink *outlink)
 outlink->w = ctx->output.width;
 outlink->h = ctx->output.height;
 
+prepare_sws_context(outlink);
+
 return 0;
 }
 
-static int copy_from_frame_to_dnn(DNNData *dnn_input, const AVFrame *frame)
+static int copy_from_frame_to_dnn(DnnProcessingContext *ctx, const AVFrame 
*frame)
 {
 int bytewidth = av_image_get_linesize(frame->format, frame->width, 0);
+DNNData *dnn_input = &ctx->input;
 
 switch (frame->format) {
 case AV_PIX_FMT_RGB24:
 case AV_PIX_FMT_BGR24:
 if (dnn_input->dt == DNN_FLOAT) {
-float *dnn_input_data = dnn_input->data;
-for (int i = 0; i < frame->height; i++) {
-for(int j = 0; j < frame->width * 3; j++) {
-int k = i * frame->linesize[0] + j;
-int t = i * frame->width * 3 + j;
-dnn_input_data[t] = frame->data[0][k] / 255.0f;
-}
-}
+sws_scale(ctx->sws_gray8_to_grayf32, (const uint8_t 
**)frame->data, frame->linesize,
+  0, frame->height, (uint8_t * const*)(&dnn_input->data),
+  (const int [4]){frame->linesize[0] * sizeof(float), 0, 
0, 0});
 } else {
 av_assert0(dnn_input->dt == DNN_UINT8);
 av_image_copy_plane(dnn_input->data, bytewidth,
@@ -266,22 +307,19 @@ static int copy_from_frame_to_dnn(DNNData *dnn_input, 
const AVFrame *frame)
 return 0;
 }
 
-static int copy_from_dnn_to_frame(AVFrame *frame, const DNNData *dnn_output)
+static int copy_from_dnn_to_frame(DnnProcessingContext *ctx, AVFrame *frame)
 {
 int bytewidth = av_image_get_linesize(frame->format, frame->width, 0);
+DNNData *dnn_output = &ctx->output;
 
 switch (frame->format) {
 case AV_PIX_FMT_RGB24:
 case AV_PIX_FMT_BGR24:
 if (dnn_output->dt == DNN_FLOAT) {
-float *dnn_output_data = dnn_output->data;
-for (int i = 0; i < frame->height; i++) {
-for(int j = 0; j < frame->width * 3; j++) {
-int k = i * frame->linesize[0] + j;
-int t = i * frame->width * 3 + j;
-frame->data[0][k] = 
av_clip_uintp2((int)(dnn_output_data[t] * 255.0f), 8);
-}
-
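
The patch leans on swscale's GRAY8 to GRAYF32 conversion for the /255.0f
normalization, treating an interleaved RGB24/BGR24 row as a gray plane that
is 3*width samples wide. A standalone sketch of the uint8-to-float direction
(function name and buffer handling are illustrative, not taken from the
patch):

#include "libswscale/swscale.h"

static int u8_to_f32(const uint8_t *src, int src_linesize,
                     float *dst, int width, int height)
{
    struct SwsContext *sws =
        sws_getContext(width, height, AV_PIX_FMT_GRAY8,
                       width, height, AV_PIX_FMT_GRAYF32,
                       0, NULL, NULL, NULL);
    if (!sws)
        return AVERROR(EINVAL);

    /* every output sample becomes the input byte divided by 255.0f */
    sws_scale(sws,
              (const uint8_t * const [4]){ src },
              (const int [4]){ src_linesize },
              0, height,
              (uint8_t * const [4]){ (uint8_t *)dst },
              (const int [4]){ width * (int)sizeof(float) });

    sws_freeContext(sws);
    return 0;
}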

[FFmpeg-devel] [PATCH 4/4] avfilter/vf_dnn_processing.c: add frame size change support for planar yuv format

2020-02-24 Thread Guo, Yejun
The Y channel is handled and resized by the DNN. The UV channels
are resized with swscale.

The command to use espcn.pb (see vf_sr) looks like:
./ffmpeg -i 480p.jpg -vf 
format=yuv420p,dnn_processing=dnn_backend=tensorflow:model=espcn.pb:input=x:output=y
 -y tmp.espcn.jpg

Signed-off-by: Guo, Yejun 
---
 doc/filters.texi|  8 
 libavfilter/vf_dnn_processing.c | 37 ++---
 2 files changed, 38 insertions(+), 7 deletions(-)

diff --git a/doc/filters.texi b/doc/filters.texi
index 71ea822..00a2e5c 100644
--- a/doc/filters.texi
+++ b/doc/filters.texi
@@ -9201,6 +9201,12 @@ Handle the Y channel with srcnn.pb (see  @ref{sr}) for 
frame with yuv420p (plana
 ./ffmpeg -i 480p.jpg -vf 
format=yuv420p,scale=w=iw*2:h=ih*2,dnn_processing=dnn_backend=tensorflow:model=srcnn.pb:input=x:output=y
 -y srcnn.jpg
 @end example
 
+@item
+Handle the Y channel with espcn.pb (see  @ref{sr}), which changes frame size, 
for format yuv420p (planar YUV formats supported):
+@example
+./ffmpeg -i 480p.jpg -vf 
format=yuv420p,dnn_processing=dnn_backend=tensorflow:model=espcn.pb:input=x:output=y
 -y tmp.espcn.jpg
+@end example
+
 @end itemize
 
 @section drawbox
@@ -17353,6 +17359,8 @@ Default value is @code{2}. Scale factor is necessary 
for SRCNN model, because it
 input upscaled using bicubic upscaling with proper scale factor.
 @end table
 
+This feature can also be finished with  @ref{dnn_processing}.
+
 @section ssim
 
 Obtain the SSIM (Structural SImilarity Metric) between two input videos.
diff --git a/libavfilter/vf_dnn_processing.c b/libavfilter/vf_dnn_processing.c
index f9458f0..7f40f85 100644
--- a/libavfilter/vf_dnn_processing.c
+++ b/libavfilter/vf_dnn_processing.c
@@ -51,6 +51,8 @@ typedef struct DnnProcessingContext {
 
 struct SwsContext *sws_gray8_to_grayf32;
 struct SwsContext *sws_grayf32_to_gray8;
+struct SwsContext *sws_uv_scale;
+int sws_uv_height;
 } DnnProcessingContext;
 
 #define OFFSET(x) offsetof(DnnProcessingContext, x)
@@ -274,6 +276,18 @@ static int prepare_sws_context(AVFilterLink *outlink)
outlink->h,
AV_PIX_FMT_GRAY8,
0, NULL, NULL, NULL);
+
+if (inlink->w != outlink->w || inlink->h != outlink->h) {
+const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(fmt);
+int sws_src_h = AV_CEIL_RSHIFT(inlink->h, desc->log2_chroma_h);
+int sws_src_w = AV_CEIL_RSHIFT(inlink->w, desc->log2_chroma_w);
+int sws_dst_h = AV_CEIL_RSHIFT(outlink->h, desc->log2_chroma_h);
+int sws_dst_w = AV_CEIL_RSHIFT(outlink->w, desc->log2_chroma_w);
+ctx->sws_uv_scale = sws_getContext(sws_src_w, sws_src_h, 
AV_PIX_FMT_GRAY8,
+   sws_dst_w, sws_dst_h, 
AV_PIX_FMT_GRAY8,
+   SWS_BICUBIC, NULL, NULL, NULL);
+ctx->sws_uv_height = sws_src_h;
+}
 return 0;
 default:
 //do nothing
@@ -404,13 +418,21 @@ static av_always_inline int isPlanarYUV(enum 
AVPixelFormat pix_fmt)
 
 static int copy_uv_planes(DnnProcessingContext *ctx, AVFrame *out, const 
AVFrame *in)
 {
-const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(in->format);
-int uv_height = AV_CEIL_RSHIFT(in->height, desc->log2_chroma_h);
-for (int i = 1; i < 3; ++i) {
-int bytewidth = av_image_get_linesize(in->format, in->width, i);
-av_image_copy_plane(out->data[i], out->linesize[i],
-in->data[i], in->linesize[i],
-bytewidth, uv_height);
+if (!ctx->sws_uv_scale) {
+av_assert0(in->height == out->height && in->width == out->width);
+const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(in->format);
+int uv_height = AV_CEIL_RSHIFT(in->height, desc->log2_chroma_h);
+for (int i = 1; i < 3; ++i) {
+int bytewidth = av_image_get_linesize(in->format, in->width, i);
+av_image_copy_plane(out->data[i], out->linesize[i],
+in->data[i], in->linesize[i],
+bytewidth, uv_height);
+}
+} else {
+sws_scale(ctx->sws_uv_scale, (const uint8_t **)(in->data + 1), 
in->linesize + 1,
+  0, ctx->sws_uv_height, out->data + 1, out->linesize + 1);
+sws_scale(ctx->sws_uv_scale, (const uint8_t **)(in->data + 2), 
in->linesize + 2,
+  0, ctx->sws_uv_height, out->data + 2, out->linesize + 2);
 }
 
 return 0;
@@ -455,6 +477,7 @@ static av_cold void uninit(AVFilterContext *ctx)
 
 sws_freeContext(context->sws_gray8_to_grayf32);
 sws_freeContext(context->sws_grayf32_to_gray8);
+sws_freeContext(context->sws_uv_scale);
 
 if (context->dnn_module)
 (context->dnn_module->free_model)(&context->model);
-- 
2.

[FFmpeg-devel] [PATCH 3/4] avfilter/vf_dnn_processing.c: add planar yuv format support

2020-02-24 Thread Guo, Yejun
Only the Y channel is handled by the DNN; the UV channels are copied
without changes.

The command to use srcnn.pb (see vf_sr) looks like:
./ffmpeg -i 480p.jpg -vf 
format=yuv420p,scale=w=iw*2:h=ih*2,dnn_processing=dnn_backend=tensorflow:model=srcnn.pb:input=x:output=y
 -y srcnn.jpg

Signed-off-by: Guo, Yejun 
---
 doc/filters.texi|  8 +
 libavfilter/vf_dnn_processing.c | 72 +
 2 files changed, 80 insertions(+)

diff --git a/doc/filters.texi b/doc/filters.texi
index 70fd7a4..71ea822 100644
--- a/doc/filters.texi
+++ b/doc/filters.texi
@@ -9180,6 +9180,8 @@ Set the output name of the dnn network.
 
 @end table
 
+@subsection Examples
+
 @itemize
 @item
 Halve the red channle of the frame with format rgb24:
@@ -9193,6 +9195,12 @@ Halve the pixel value of the frame with format gray32f:
 ffmpeg -i input.jpg -vf 
format=grayf32,dnn_processing=model=halve_gray_float.model:input=dnn_in:output=dnn_out:dnn_backend=native
 -y out.native.png
 @end example
 
+@item
+Handle the Y channel with srcnn.pb (see  @ref{sr}) for frame with yuv420p 
(planar YUV formats supported):
+@example
+./ffmpeg -i 480p.jpg -vf 
format=yuv420p,scale=w=iw*2:h=ih*2,dnn_processing=dnn_backend=tensorflow:model=srcnn.pb:input=x:output=y
 -y srcnn.jpg
+@end example
+
 @end itemize
 
 @section drawbox
diff --git a/libavfilter/vf_dnn_processing.c b/libavfilter/vf_dnn_processing.c
index 4d0ee78..f9458f0 100644
--- a/libavfilter/vf_dnn_processing.c
+++ b/libavfilter/vf_dnn_processing.c
@@ -110,6 +110,8 @@ static int query_formats(AVFilterContext *context)
 static const enum AVPixelFormat pix_fmts[] = {
 AV_PIX_FMT_RGB24, AV_PIX_FMT_BGR24,
 AV_PIX_FMT_GRAY8, AV_PIX_FMT_GRAYF32,
+AV_PIX_FMT_YUV420P, AV_PIX_FMT_YUV422P,
+AV_PIX_FMT_YUV444P, AV_PIX_FMT_YUV410P, AV_PIX_FMT_YUV411P,
 AV_PIX_FMT_NONE
 };
 AVFilterFormats *fmts_list = ff_make_format_list(pix_fmts);
@@ -163,6 +165,11 @@ static int check_modelinput_inlink(const DNNData 
*model_input, const AVFilterLin
 }
 return 0;
 case AV_PIX_FMT_GRAYF32:
+case AV_PIX_FMT_YUV420P:
+case AV_PIX_FMT_YUV422P:
+case AV_PIX_FMT_YUV444P:
+case AV_PIX_FMT_YUV410P:
+case AV_PIX_FMT_YUV411P:
 if (model_input->channels != 1) {
 LOG_FORMAT_CHANNEL_MISMATCH();
 return AVERROR(EIO);
@@ -246,6 +253,28 @@ static int prepare_sws_context(AVFilterLink *outlink)
0, NULL, NULL, NULL);
 }
 return 0;
+case AV_PIX_FMT_YUV420P:
+case AV_PIX_FMT_YUV422P:
+case AV_PIX_FMT_YUV444P:
+case AV_PIX_FMT_YUV410P:
+case AV_PIX_FMT_YUV411P:
+av_assert0(input_dt == DNN_FLOAT);
+av_assert0(output_dt == DNN_FLOAT);
+ctx->sws_gray8_to_grayf32 = sws_getContext(inlink->w,
+   inlink->h,
+   AV_PIX_FMT_GRAY8,
+   inlink->w,
+   inlink->h,
+   AV_PIX_FMT_GRAYF32,
+   0, NULL, NULL, NULL);
+ctx->sws_grayf32_to_gray8 = sws_getContext(outlink->w,
+   outlink->h,
+   AV_PIX_FMT_GRAYF32,
+   outlink->w,
+   outlink->h,
+   AV_PIX_FMT_GRAY8,
+   0, NULL, NULL, NULL);
+return 0;
 default:
 //do nothing
 break;
@@ -300,6 +329,15 @@ static int copy_from_frame_to_dnn(DnnProcessingContext 
*ctx, const AVFrame *fram
 frame->data[0], frame->linesize[0],
 bytewidth, frame->height);
 return 0;
+case AV_PIX_FMT_YUV420P:
+case AV_PIX_FMT_YUV422P:
+case AV_PIX_FMT_YUV444P:
+case AV_PIX_FMT_YUV410P:
+case AV_PIX_FMT_YUV411P:
+sws_scale(ctx->sws_gray8_to_grayf32, (const uint8_t **)frame->data, 
frame->linesize,
+  0, frame->height, (uint8_t * const*)(&dnn_input->data),
+  (const int [4]){frame->width * sizeof(float), 0, 0, 0});
+return 0;
 default:
 return AVERROR(EIO);
 }
@@ -341,6 +379,15 @@ static int copy_from_dnn_to_frame(DnnProcessingContext 
*ctx, AVFrame *frame)
 dnn_output->data, bytewidth,
 bytewidth, frame->height);
 return 0;
+case AV_PIX_FMT_YUV420P:
+case AV_PIX_FMT_YUV422P:
+case AV_PIX_FMT_YUV444P:
+case AV_PIX_FMT_YUV410P:
+case AV_PIX_FMT_YUV411P:
+sws_scale(ctx->sws_grayf32_to_gray8, (const uint8_t *[4]){(const 
uint8_t *)dn

[FFmpeg-devel] [PATCH 1/4] avfilter/vf_sr.c: refine code to use AVPixFmtDescriptor.log2_chroma_h/w

2020-02-24 Thread Guo, Yejun
Signed-off-by: Guo, Yejun 
---
 libavfilter/vf_sr.c | 40 ++--
 1 file changed, 6 insertions(+), 34 deletions(-)

diff --git a/libavfilter/vf_sr.c b/libavfilter/vf_sr.c
index 562b030..f000eda 100644
--- a/libavfilter/vf_sr.c
+++ b/libavfilter/vf_sr.c
@@ -176,40 +176,12 @@ static int config_props(AVFilterLink *inlink)
 sr_context->sws_slice_h = inlink->h;
 } else {
 if (inlink->format != AV_PIX_FMT_GRAY8){
-sws_src_h = sr_context->input.height;
-sws_src_w = sr_context->input.width;
-sws_dst_h = sr_context->output.height;
-sws_dst_w = sr_context->output.width;
-
-switch (inlink->format){
-case AV_PIX_FMT_YUV420P:
-sws_src_h = AV_CEIL_RSHIFT(sws_src_h, 1);
-sws_src_w = AV_CEIL_RSHIFT(sws_src_w, 1);
-sws_dst_h = AV_CEIL_RSHIFT(sws_dst_h, 1);
-sws_dst_w = AV_CEIL_RSHIFT(sws_dst_w, 1);
-break;
-case AV_PIX_FMT_YUV422P:
-sws_src_w = AV_CEIL_RSHIFT(sws_src_w, 1);
-sws_dst_w = AV_CEIL_RSHIFT(sws_dst_w, 1);
-break;
-case AV_PIX_FMT_YUV444P:
-break;
-case AV_PIX_FMT_YUV410P:
-sws_src_h = AV_CEIL_RSHIFT(sws_src_h, 2);
-sws_src_w = AV_CEIL_RSHIFT(sws_src_w, 2);
-sws_dst_h = AV_CEIL_RSHIFT(sws_dst_h, 2);
-sws_dst_w = AV_CEIL_RSHIFT(sws_dst_w, 2);
-break;
-case AV_PIX_FMT_YUV411P:
-sws_src_w = AV_CEIL_RSHIFT(sws_src_w, 2);
-sws_dst_w = AV_CEIL_RSHIFT(sws_dst_w, 2);
-break;
-default:
-av_log(context, AV_LOG_ERROR,
-   "could not create SwsContext for scaling for given 
input pixel format: %s\n",
-   av_get_pix_fmt_name(inlink->format));
-return AVERROR(EIO);
-}
+const AVPixFmtDescriptor *desc = 
av_pix_fmt_desc_get(inlink->format);
+sws_src_h = AV_CEIL_RSHIFT(sr_context->input.height, 
desc->log2_chroma_h);
+sws_src_w = AV_CEIL_RSHIFT(sr_context->input.width, 
desc->log2_chroma_w);
+sws_dst_h = AV_CEIL_RSHIFT(sr_context->output.height, 
desc->log2_chroma_h);
+sws_dst_w = AV_CEIL_RSHIFT(sr_context->output.width, 
desc->log2_chroma_w);
+
 sr_context->sws_contexts[0] = sws_getContext(sws_src_w, sws_src_h, 
AV_PIX_FMT_GRAY8,
  sws_dst_w, sws_dst_h, 
AV_PIX_FMT_GRAY8,
  SWS_BICUBIC, NULL, 
NULL, NULL);
-- 
2.7.4
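
The switch being removed simply restated, format by format, what the
pixel-format descriptor already stores in log2_chroma_w/log2_chroma_h. A
minimal standalone illustration (not from the patch; the numbers shown are
for yuv420p, where both shifts are 1):

#include <stdio.h>
#include "libavutil/common.h"
#include "libavutil/pixdesc.h"

int main(void)
{
    const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(AV_PIX_FMT_YUV420P);

    /* a 1920x1081 luma plane maps to a 960x541 chroma plane;
     * AV_CEIL_RSHIFT rounds the odd height up instead of truncating */
    printf("chroma plane: %dx%d\n",
           AV_CEIL_RSHIFT(1920, desc->log2_chroma_w),
           AV_CEIL_RSHIFT(1081, desc->log2_chroma_h));
    return 0;
}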

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".