Re: [FFmpeg-devel] [PATCH 7/7] Handle AVID MJPEG streams directly in the MJPEG decoder.

2020-12-14 Thread James Almer

On 12/14/2020 9:13 PM, Michael Niedermayer wrote:

On Sat, Dec 12, 2020 at 04:45:55PM +0100, Anton Khirnov wrote:

AVID streams, currently handled by the AVRN decoder, can be (depending on
extradata contents) either MJPEG or raw video. To decode the MJPEG
variant, the AVRN decoder currently instantiates a MJPEG decoder
internally and forwards decoded frames to the caller (possibly after
cropping them).

This is suboptimal, because the AVRN decoder does not forward all the
features of the internal MJPEG decoder, such as direct rendering.
Handling such forwarding in a full and generic manner would be quite
hard, so it is simpler to just handle those streams in the MJPEG decoder
directly.

The AVRN decoder, which now handles only the raw streams, can now be
marked as supporting direct rendering.

This also removes the last remaining internal use of the obsolete
decoding API.
---
  libavcodec/avrndec.c  | 74 ++-
  libavcodec/mjpegdec.c | 11 +++
  libavcodec/version.h  |  2 +-
  libavformat/avidec.c  |  6 
  libavformat/isom.c|  2 +-
  libavformat/version.h |  2 +-
  tests/fate/video.mak  |  2 +-
  7 files changed, 23 insertions(+), 76 deletions(-)


breaks:
./ffmpeg -i ~/tickets/1527/24bpp.mov  whatever.mov

...
Press [q] to stop, [?] for help
[avrn @ 0x5625d111bbc0] packet too small
Error while decoding stream #0:0: Invalid data found when processing input
[avrn @ 0x5625d111bbc0] packet too small
Error while decoding stream #0:0: Invalid data found when processing input
[avrn @ 0x5625d111bbc0] packet too small
Error while decoding stream #0:0: Invalid data found when processing input
[avrn @ 0x5625d111bbc0] packet too small


Reenabling...


diff --git a/libavformat/isom.c b/libavformat/isom.c
index d1ef6e3407..db84bb417b 100644
--- a/libavformat/isom.c
+++ b/libavformat/isom.c
@@ -115,7 +115,7 @@ const AVCodecTag ff_codec_movvideo_tags[] = {
 
 { AV_CODEC_ID_MJPEG,  MKTAG('j', 'p', 'e', 'g') }, /* PhotoJPEG */

 { AV_CODEC_ID_MJPEG,  MKTAG('m', 'j', 'p', 'a') }, /* Motion-JPEG (format A) */
-{ AV_CODEC_ID_AVRN ,  MKTAG('A', 'V', 'D', 'J') }, /* MJPEG with alpha-channel (AVID JFIF meridien compressed) */
+{ AV_CODEC_ID_MJPEG,  MKTAG('A', 'V', 'D', 'J') }, /* MJPEG with alpha-channel (AVID JFIF meridien compressed) */
 /*  { AV_CODEC_ID_MJPEG,  MKTAG('A', 'V', 'R', 'n') }, *//* MJPEG with alpha-channel (AVID ABVB/Truevision NuVista) */


...this line here seems to fix it (the riff.c entry for AVRn selects the
avrn decoder otherwise).
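
For reference, the riff.c entry in question looks roughly like this (quoted
from memory, so the exact surrounding context may differ):

/* libavformat/riff.c, in ff_codec_bmp_tags[] (approximate excerpt) */
{ AV_CODEC_ID_AVRN, MKTAG('A', 'V', 'R', 'n') },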

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH 7/7] Handle AVID MJPEG streams directly in the MJPEG decoder.

2020-12-14 Thread Michael Niedermayer
On Sat, Dec 12, 2020 at 04:45:55PM +0100, Anton Khirnov wrote:
> AVID streams, currently handled by the AVRN decoder, can be (depending on
> extradata contents) either MJPEG or raw video. To decode the MJPEG
> variant, the AVRN decoder currently instantiates a MJPEG decoder
> internally and forwards decoded frames to the caller (possibly after
> cropping them).
> 
> This is suboptimal, because the AVRN decoder does not forward all the
> features of the internal MJPEG decoder, such as direct rendering.
> Handling such forwarding in a full and generic manner would be quite
> hard, so it is simpler to just handle those streams in the MJPEG decoder
> directly.
> 
> The AVRN decoder, which now handles only the raw streams, can now be
> marked as supporting direct rendering.
> 
> This also removes the last remaining internal use of the obsolete
> decoding API.
> ---
>  libavcodec/avrndec.c  | 74 ++-
>  libavcodec/mjpegdec.c | 11 +++
>  libavcodec/version.h  |  2 +-
>  libavformat/avidec.c  |  6 
>  libavformat/isom.c|  2 +-
>  libavformat/version.h |  2 +-
>  tests/fate/video.mak  |  2 +-
>  7 files changed, 23 insertions(+), 76 deletions(-)

breaks:
./ffmpeg -i ~/tickets/1527/24bpp.mov  whatever.mov

...
Press [q] to stop, [?] for help
[avrn @ 0x5625d111bbc0] packet too small
Error while decoding stream #0:0: Invalid data found when processing input
[avrn @ 0x5625d111bbc0] packet too small
Error while decoding stream #0:0: Invalid data found when processing input
[avrn @ 0x5625d111bbc0] packet too small
Error while decoding stream #0:0: Invalid data found when processing input
[avrn @ 0x5625d111bbc0] packet too small
...




[...]
-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

"I am not trying to be anyone's saviour, I'm trying to think about the
 future and not be sad" - Elon Musk




Re: [FFmpeg-devel] Call for maintainers: vf_uspp, vf_mcdeint

2020-12-14 Thread Michael Niedermayer
On Mon, Dec 14, 2020 at 11:41:58AM +0100, Anton Khirnov wrote:
> Quoting Michael Niedermayer (2020-12-14 00:52:06)
> > On Sun, Dec 13, 2020 at 06:22:08PM +0100, Anton Khirnov wrote:
> > > Quoting Michael Niedermayer (2020-12-13 15:03:19)
> > > > On Sun, Dec 13, 2020 at 02:02:33PM +0100, Anton Khirnov wrote:
> > > > > Quoting Paul B Mahol (2020-12-13 13:40:15)
> > > > > > Why? Is it so hard to fix them to work with the latest API?
> > > > > 
> > > > > It is not exactly obvious, since coded_frame is gone. I suppose you
> > > > > could instantiate an encoder and a decoder to work around that, but it
> > > > > all seems terribly inefficient. Lavfi seems to have some ME code, so
> > > > > perhaps that could be used for mcdeint. Or if not, maybe someone could
> > > > > get motivated to port something from avisynth or vapoursynth. 
> > > > > Similarly
> > > > > for uspp, surely one can do a snow-like blur without requiring a whole
> > > > > encoder.
> > > > > 
> > > > > In any case, seems to me like a good opportunity to find out whether
> > > > > anyone cares enough about those filters to keep them alive. I don't
> > > > > think we should keep code that nobody is willing to maintain.
> > > > 
> > > > I might do the minimal changes needed to keep these working when I
> > > > find the time and if no one else does. Certainly I would not be sad
> > > > if someone else did it before me ;)
> > > > 
> > > > Also, if a redesign happens, what looks interesting to me would be to
> > > > be able to export the needed information from encoders.
> > > > Factorizing code out of one specific encoder so that only it can be
> > > > used is less general, but could be done too of course.
> > > > 
> > > > If, OTOH, encoders in general could export their internal buffers for
> > > > filters or debugging, that seems more interesting.
> > > 
> > > TBH I am very skeptical that this can be done in a clean and
> > > maintainable way. 
> > 
> > Why?
> > One could simply attach the decoded frame bitmap as side data to the
> > packet. On the surface, at least, this does not seem to require anything
> > anywhere else. It's just like any other side data, except that it
> > would be done only when requested by the user.
> > I imagine this might be little more than a single call in an encoder
> > with the AVFrame and AVPacket as arguments ...
> > 
> > 
> > > Splitting off individual pieces and making them
> > > reusable is a better approach.
> > 
> > Better for these 2 specific filters, yes, but that also makes it harder
> > to change them to a different encoder or even different encoder settings.
> > 
> > As the filters are currently, it would be reasonably easy to change them
> > to a different encoder, experiment around with them and things like that.
> 
> I am not convinced that passing video through an entire encoder is a
> meaningful filtering method, if one wants specific and well-defined
> results. Not to mention it will most likely be incredibly slow.

I think "incredibly" is not accurate but more important
there are 2 rather different cases. (if we limit ourselfs to the wavelet 
postprocess)

the first is to apply a specific wavelet postprocess filter.
Your comments make sense for this use case

the second is to have a generic filter which averages cyclically shifted 
encodings
of an image. 

I do not know what people do with this code/filter. So i can just speak about
myself.

Am i interrested in applying uspp to some video ? maybe

do i care about the speed of this ? i dont think so, i do care in the case of 
other
pp filters though but not uspp

Am i interrested in testing a wide range of encoders with this cyclic shift 
averaging
or maybe even extend this and try something else entirely than averaging to 
maybe
get some other cool effect unrelated to postprocess. Yes definitly if i find 
the time,
its just that maybe i will not find the time ...
Another idea with a encoder + decoder filter would be to introduce intentional
errors after the encoder to create artifacts, if multiple such streams are 
averaged
this could also generate an interresting effect.
In this sense the principle of running this through a full encoder-decoder chain
certainly would have users. There are people that use such intentional damage 
for
artistic purposes.
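
For what it's worth, a minimal sketch of the "single call" idea quoted above
could look like the following. AV_PKT_DATA_RECON_FRAME is a hypothetical
side-data type used purely for illustration (no such enum value exists in
libavcodec today), and a real implementation would also need to handle
multiple planes, alignment and lifetime:

#include <string.h>

#include "libavcodec/packet.h"
#include "libavutil/error.h"
#include "libavutil/frame.h"

/* Hypothetical helper an encoder could call after producing a packet:
 * copy the luma plane of its reconstructed frame into packet side data.
 * AV_PKT_DATA_RECON_FRAME stands in for whatever side-data type would
 * actually be added for this purpose. */
static int attach_recon_frame(AVPacket *pkt, const AVFrame *recon)
{
    size_t   size = (size_t)recon->height * recon->linesize[0];
    uint8_t *buf  = av_packet_new_side_data(pkt, AV_PKT_DATA_RECON_FRAME, size);

    if (!buf)
        return AVERROR(ENOMEM);
    memcpy(buf, recon->data[0], size);
    return 0;
}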

Thanks

[...]
-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

If a bugfix only changes things apparently unrelated to the bug with no
further explanation, that is a good sign that the bugfix is wrong.



[FFmpeg-devel] [PATCH v2] In order to fine-control referencing schemes in VP9 encoding, there is a need to use VP9E_SET_SVC_REF_FRAME_CONFIG method. This commit provides a way to use the API through f

2020-12-14 Thread Wonkap Jang
---
 doc/encoders.texi  | 32 +
 libavcodec/libvpxenc.c | 79 ++
 2 files changed, 111 insertions(+)

diff --git a/doc/encoders.texi b/doc/encoders.texi
index 0b1c69e982..aa3a2221b6 100644
--- a/doc/encoders.texi
+++ b/doc/encoders.texi
@@ -2129,6 +2129,38 @@ midpoint is passed in rather than calculated for a 
specific clip or chunk.
 The valid range is [0, 1]. 0 (default) uses standard VBR.
 @item enable-tpl @var{boolean}
 Enable temporal dependency model.
+@item ref-frame-config
+Using per-frame metadata, set members of the structure 
@code{vpx_svc_ref_frame_config_t} in @code{vpx/vp8cx.h} to fine-control 
referencing schemes and frame buffer management.
+@*Use a :-separated list of key=value pairs.
+For example, 
+@example
+av_dict_set(&av_frame->metadata, "ref-frame-config", \
+"rfc_update_buffer_slot=7:rfc_lst_fb_idx=0:rfc_gld_fb_idx=1:rfc_alt_fb_idx=2:rfc_reference_last=0:rfc_reference_golden=0:rfc_reference_alt_ref=0");}
+@end example
+@table @option
+@item rfc_update_buffer_slot
+Indicates the buffer slot number to update
+@item rfc_update_last
+Indicates whether to update the LAST frame
+@item rfc_update_golden
+Indicates whether to update GOLDEN frame
+@item rfc_update_alt_ref
+Indicates whether to update ALT_REF frame
+@item rfc_lst_fb_idx
+LAST frame buffer index
+@item rfc_gld_fb_idx
+GOLDEN frame buffer index
+@item rfc_alt_fb_idx
+ALT_REF frame buffer index
+@item rfc_reference_last
+Indicates whether to reference LAST frame
+@item rfc_reference_golden
+Indicates whether to reference GOLDEN frame
+@item rfc_reference_alt_ref
+Indicates whether to reference ALT_REF frame
+@item rfc_reference_duration
+Indicates frame duration
+@end table
 @end table
 
 @end table
diff --git a/libavcodec/libvpxenc.c b/libavcodec/libvpxenc.c
index a7c76eb835..345c71cd22 100644
--- a/libavcodec/libvpxenc.c
+++ b/libavcodec/libvpxenc.c
@@ -125,6 +125,11 @@ typedef struct VPxEncoderContext {
  * encounter a frame with ROI side data.
  */
 int roi_warned;
+
+#if CONFIG_LIBVPX_VP9_ENCODER && defined 
(VPX_CTRL_VP9E_SET_MAX_INTER_BITRATE_PCT)
+vpx_svc_ref_frame_config_t ref_frame_config;
+AVDictionary *vpx_ref_frame_config;
+#endif
 } VPxContext;
 
 /** String mappings for enum vp8e_enc_control_id */
@@ -152,6 +157,9 @@ static const char *const ctlidstr[] = {
 [VP9E_SET_SVC_LAYER_ID]= "VP9E_SET_SVC_LAYER_ID",
 #if VPX_ENCODER_ABI_VERSION >= 12
 [VP9E_SET_SVC_PARAMETERS]  = "VP9E_SET_SVC_PARAMETERS",
+#if defined (VPX_CTRL_VP9E_SET_MAX_INTER_BITRATE_PCT)
+[VP9E_SET_SVC_REF_FRAME_CONFIG]= "VP9E_SET_SVC_REF_FRAME_CONFIG",
+#endif
 #endif
 [VP9E_SET_SVC] = "VP9E_SET_SVC",
 #if VPX_ENCODER_ABI_VERSION >= 11
@@ -394,6 +402,18 @@ static void vp8_ts_parse_int_array(int *dest, char *value, 
size_t value_len, int
 }
 }
 
+static void vp8_ts_parse_int64_array(int64_t *dest, char *value, size_t 
value_len, int max_entries)
+{
+int dest_idx = 0;
+char *saveptr = NULL;
+char *token = av_strtok(value, ",", &saveptr);
+
+while (token && dest_idx < max_entries) {
+dest[dest_idx++] = strtoull(token, NULL, 10);
+token = av_strtok(NULL, ",", &saveptr);
+}
+}
+
 static void set_temporal_layer_pattern(int layering_mode, vpx_codec_enc_cfg_t 
*cfg,
int *layer_flags, int *flag_periodicity)
 {
@@ -541,6 +561,48 @@ static int vpx_ts_param_parse(VPxContext *ctx, struct 
vpx_codec_enc_cfg *enccfg,
 return 0;
 }
 
+#if CONFIG_LIBVPX_VP9_ENCODER && defined 
(VPX_CTRL_VP9E_SET_MAX_INTER_BITRATE_PCT)
+static int vpx_ref_frame_config_parse(VPxContext *ctx, const struct 
vpx_codec_enc_cfg *enccfg,
+  char *key, char *value, enum AVCodecID codec_id)
+{
+size_t value_len = strlen(value);
+int ss_number_layers = enccfg->ss_number_layers;
+vpx_svc_ref_frame_config_t *ref_frame_config = &ctx->ref_frame_config;
+
+if (!value_len)
+return -1;
+
+if (codec_id != AV_CODEC_ID_VP9)
+return -1;
+
+if (!strcmp(key, "rfc_update_buffer_slot")) {
+vp8_ts_parse_int_array(ref_frame_config->update_buffer_slot, value, 
value_len, ss_number_layers);
+} else if (!strcmp(key, "rfc_update_last")) {
+vp8_ts_parse_int_array(ref_frame_config->update_last, value, 
value_len, ss_number_layers);
+} else if (!strcmp(key, "rfc_update_golden")) {
+vp8_ts_parse_int_array(ref_frame_config->update_golden, value, 
value_len, ss_number_layers);
+} else if (!strcmp(key, "rfc_update_alt_ref")) {
+vp8_ts_parse_int_array(ref_frame_config->update_alt_ref, value, 
value_len, ss_number_layers);
+} else if (!strcmp(key, "rfc_lst_fb_idx")) {
+vp8_ts_parse_int_array(ref_frame_config->lst_fb_idx, value, value_len, 
ss_number_layers);
+} else if (!strcmp(key, "rfc_gld_fb_idx")) {
+vp8_ts_parse_int_array(ref_frame_config->gld_fb_idx, value, value

Re: [FFmpeg-devel] [PATCH] avformat/movenc: Remove dts delay from duration.

2020-12-14 Thread Martin Storsjö

Hi,

On Fri, 11 Dec 2020, Josh Allmann wrote:


On Fri, 11 Dec 2020 at 14:07, Martin Storsjö  wrote:


On Fri, 11 Dec 2020, Josh Allmann wrote:

> A negative start_dts value (eg, delay from edit lists) typically yields
> a duration larger than end_pts. During edit list processing, the
> delay is removed again, yielding the correct duration within the elst.
>
> However, other duration-carrying atoms (tkhd, mvhd, mdhd) still have
> the delay incorporated into their durations. This is incorrect.
>
> Fix this by withholding delay from the duration if edit lists are used.
> This also simplifies edit-list processing a bit, since the delay
> does not need to be removed from the calculated duration again.
> ---
>
>  The mov spec says that the tkhd duration is "derived from the track's
>  edits" [1] and the durations of the other atoms (mvhd, mdhd) are in turn
>  taken from the longest track. So it seems that incorporating the delay
>  into the track duration is a bug in itself when the edit list has the
> correct duration, and this propagates out to the other top-level durations.
>
>  Unsure of how this change interacts with other modes that may expect
>  negative timestamps such as CMAF, so the patch errs on the side of
>  caution and only takes effect if edit lists are used. Can loosen that
>  up if necessary.
>
>  [1] 
https://developer.apple.com/library/archive/documentation/QuickTime/QTFF/QTFFChap2/qtff2.html#//apple_ref/doc/uid/TP4939-CH204-BBCEIDFA
>
> libavformat/movenc.c | 13 -
> 1 file changed, 8 insertions(+), 5 deletions(-)
>
> diff --git a/libavformat/movenc.c b/libavformat/movenc.c
> index 7db2e28840..31441a9f6c 100644
> --- a/libavformat/movenc.c
> +++ b/libavformat/movenc.c
> @@ -2831,7 +2831,14 @@ static int64_t calc_pts_duration(MOVMuxContext *mov, 
MOVTrack *track)
> if (track->end_pts != AV_NOPTS_VALUE &&
> track->start_dts != AV_NOPTS_VALUE &&
> track->start_cts != AV_NOPTS_VALUE) {
> -return track->end_pts - (track->start_dts + track->start_cts);
> +int64_t dur = track->end_pts, delay = track->start_dts + 
track->start_cts;
> +/* Note, this delay is calculated from the pts of the first sample,
> + * ensuring that we don't reduce the duration for cases with
> + * dts<0 pts=0. */

If you have a stream starting with dts<0 pts=0, you'll have start_pts =
start_dts + start_cts = 0. That gives delay=0 after your modification. But
the comment says "don't reduce the duration for cases with pts=0" - where
the delay variable would be zero anyway?



I'm not quite sure what you mean - that the comment is outdated?
Or that this modification would perhaps not behave as expected?


Yeah, the comment seems wrong here - it looks like it's been moved along 
with the code, but it doesn't really make sense here and/or for the case 
you're describing, I think.



For what it's worth, the cases I'm concerned with have start_pts < 0.





I don't manage to follow the reasoning and explanation in the commit
message. To be able to concretely reason about this issue at all, we need
to look at a concrete example. Can you provide a sample input file and a
reproducible command, and point out which exact field in the muxer output
of that case that you consider wrong?



Had to create a Trac ticket to find somewhere to host the sample. Tried to put
some details there, but the formatting seems messed up and I can't figure
out how to edit it, apologies. So here is some more info -

Input sample:

https://trac.ffmpeg.org/raw-attachment/ticket/9028/test-timecode.mp4

Run the following for a transmuxed clip from 3s for a 5s duration:

ffmpeg -ss 3 -i test-timecode.mp4 -t 5 -c copy out.mp4

Note that the actual cut location is mid-GOP, so there's a 1s pts delay
at the beginning of the output file with negative pts.

ffprobe shows:

ffprobe -show_streams -show_format out.mp4 2>&1 | grep duration=

duration=5.166992 # stream duration - correct
duration=6.167000 # format duration - incorrect

mp4dump'ing out.mp4 gives this:

# incorrect: duration should be sum of elst durations
 [tkhd] size=12+80, flags=3
 duration = 6167


Thanks, I've reproduced this. I'll look closer into it and the suggested
patch and/or other ways of solving it soon, but please bear with me, I'm
a bit swamped...


// Martin

Re: [FFmpeg-devel] [PATCH v2] In order to fine-control referencing schemes in VP9 encoding, there is a need to use VP9E_SET_SVC_REF_FRAME_CONFIG method. This commit provides a way to use the API throu

2020-12-14 Thread James Zern
Hi,


On Mon, Dec 14, 2020 at 11:54 AM Wonkap Jang  wrote:
>
>
>
> On Mon, Dec 7, 2020 at 11:57 PM Wonkap Jang  wrote:
>>
>> ---
>>  doc/encoders.texi  | 32 +
>>  libavcodec/libvpxenc.c | 79 ++
>>  2 files changed, 111 insertions(+)
>>

Sorry I missed this when scanning the subjects. Please update the
commit message so it has a short one-line subject [1].

[1] 
https://git-scm.com/book/en/v2/Distributed-Git-Contributing-to-a-Project#_commit_guidelines

>> diff --git a/doc/encoders.texi b/doc/encoders.texi
>> index 0b1c69e982..aa3a2221b6 100644
>> --- a/doc/encoders.texi
>> +++ b/doc/encoders.texi
>> @@ -2129,6 +2129,38 @@ midpoint is passed in rather than calculated for a 
>> specific clip or chunk.
>>  The valid range is [0, 1]. 0 (default) uses standard VBR.
>>  @item enable-tpl @var{boolean}
>>  Enable temporal dependency model.
>> +@item ref-frame-config
>> +Using per-frame metadata, set members of the structure 
>> @code{vpx_svc_ref_frame_config_t} in @code{vpx/vp8cx.h} to fine-control 
>> referencing schemes and frame buffer management.
>> +@*Use a :-separated list of key=value pairs.
>> +For example,
>> +@example
>> +av_dict_set(&av_frame->metadata, "ref-frame-config", \
>> +"rfc_update_buffer_slot=7:rfc_lst_fb_idx=0:rfc_gld_fb_idx=1:rfc_alt_fb_idx=2:rfc_reference_last=0:rfc_reference_golden=0:rfc_reference_alt_ref=0");}
>> +@end example
>> +@table @option
>> +@item rfc_update_buffer_slot
>> +Indicates the buffer slot number to update
>> +@item rfc_update_last
>> +Indicates whether to update the LAST frame
>> +@item rfc_update_golden
>> +Indicates whether to update GOLDEN frame
>> +@item rfc_update_alt_ref
>> +Indicates whether to update ALT_REF frame
>> +@item rfc_lst_fb_idx
>> +LAST frame buffer index
>> +@item rfc_gld_fb_idx
>> +GOLDEN frame buffer index
>> +@item rfc_alt_fb_idx
>> +ALT_REF frame buffer index
>> +@item rfc_reference_last
>> +Indicates whether to reference LAST frame
>> +@item rfc_reference_golden
>> +Indicates whether to reference GOLDEN frame
>> +@item rfc_reference_alt_ref
>> +Indicates whether to reference ALT_REF frame
>> +@item rfc_reference_duration
>> +Indicates frame duration
>> +@end table
>>  @end table
>>
>>  @end table
>> diff --git a/libavcodec/libvpxenc.c b/libavcodec/libvpxenc.c
>> index a7c76eb835..345c71cd22 100644
>> --- a/libavcodec/libvpxenc.c
>> +++ b/libavcodec/libvpxenc.c
>> @@ -125,6 +125,11 @@ typedef struct VPxEncoderContext {
>>   * encounter a frame with ROI side data.
>>   */
>>  int roi_warned;
>> +
>> +#if CONFIG_LIBVPX_VP9_ENCODER && defined 
>> (VPX_CTRL_VP9E_SET_MAX_INTER_BITRATE_PCT)

It would be better to just check the ABI version in this case since
it's a little confusing to use an unrelated control in the check.
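
Something along these lines, for illustration (N being whichever
VPX_ENCODER_ABI_VERSION first provides VP9E_SET_SVC_REF_FRAME_CONFIG; the
exact number would need to be checked against vpx/vp8cx.h):

#if CONFIG_LIBVPX_VP9_ENCODER && VPX_ENCODER_ABI_VERSION >= N
    vpx_svc_ref_frame_config_t ref_frame_config;
    AVDictionary *vpx_ref_frame_config;
#endif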

>> +vpx_svc_ref_frame_config_t ref_frame_config;
>> +AVDictionary *vpx_ref_frame_config;
>> +#endif
>>  } VPxContext;
>>
>>  /** String mappings for enum vp8e_enc_control_id */
>> @@ -152,6 +157,9 @@ static const char *const ctlidstr[] = {
>>  [VP9E_SET_SVC_LAYER_ID]= "VP9E_SET_SVC_LAYER_ID",
>>  #if VPX_ENCODER_ABI_VERSION >= 12
>>  [VP9E_SET_SVC_PARAMETERS]  = "VP9E_SET_SVC_PARAMETERS",
>> +#if defined (VPX_CTRL_VP9E_SET_MAX_INTER_BITRATE_PCT)
>> +[VP9E_SET_SVC_REF_FRAME_CONFIG]= "VP9E_SET_SVC_REF_FRAME_CONFIG",
>> +#endif
>>  #endif
>>  [VP9E_SET_SVC] = "VP9E_SET_SVC",
>>  #if VPX_ENCODER_ABI_VERSION >= 11
>> @@ -394,6 +402,18 @@ static void vp8_ts_parse_int_array(int *dest, char 
>> *value, size_t value_len, int
>>  }
>>  }
>>
>> +static void vp8_ts_parse_int64_array(int64_t *dest, char *value, size_t 
>> value_len, int max_entries)
>> +{
>> +int dest_idx = 0;
>> +char *saveptr = NULL;
>> +char *token = av_strtok(value, ",", &saveptr);
>> +
>> +while (token && dest_idx < max_entries) {
>> +dest[dest_idx++] = strtoull(token, NULL, 10);
>> +token = av_strtok(NULL, ",", &saveptr);
>> +}
>> +}
>> +
>>  static void set_temporal_layer_pattern(int layering_mode, 
>> vpx_codec_enc_cfg_t *cfg,
>> int *layer_flags, int 
>> *flag_periodicity)
>>  {
>> @@ -541,6 +561,48 @@ static int vpx_ts_param_parse(VPxContext *ctx, struct 
>> vpx_codec_enc_cfg *enccfg,
>>  return 0;
>>  }
>>
>> +#if CONFIG_LIBVPX_VP9_ENCODER && defined 
>> (VPX_CTRL_VP9E_SET_MAX_INTER_BITRATE_PCT)
>> +static int vpx_ref_frame_config_parse(VPxContext *ctx, const struct 
>> vpx_codec_enc_cfg *enccfg,
>> +  char *key, char *value, enum AVCodecID 
>> codec_id)
>> +{
>> +size_t value_len = strlen(value);
>> +int ss_number_layers = enccfg->ss_number_layers;
>> +vpx_svc_ref_frame_config_t *ref_frame_config = &ctx->ref_frame_config;
>> +
>> +if (!value_len)
>> +return -1;
>> +
>> +if (codec_id != AV_CODEC_ID_VP9)
>> +return -1;
>> +
>> +if (!strcmp(key, "rfc_update_buffer_slot")) {
>> 

Re: [FFmpeg-devel] [PATCH 0/3] add vvc raw demuxer

2020-12-14 Thread Mark Thompson

On 14/12/2020 13:31, Nuo Mi wrote:

Hi Mark,
I have almost finished the cbs for the sps, pps, and slice header. I will
start to implement the parser.


This looks fun :)


A few questions for you:
1. We need to over-read some NALs to detect the frame boundaries. But those
NALs may switch/replace the sps/pps. Do we need to use an output cbs and get
frames from the output cbs?


I'm not seeing where this can happen - if you see a new parameter set then you 
must be in a new AU so you don't parse it, while a new VCL NAL as part of a new 
frame with no PS before it must have a parsable header?  (Or am I missing some 
problematic case?)


2. We can't handle an incomplete NAL in the current cbs.
https://github.com/FFmpeg/FFmpeg/blob/03c8fe49ea3f2a2444607e541dff15a1ccd7f0c2/libavcodec/h2645_parse.c#L437,
do we have a plan to fix it? What's your suggestion for the frame split?


I'm unsure what the question is.  You will always need to keep reading until
you split a valid NAL unit which isn't in the current AU (noting that a slice
being the last slice in the current frame is never sufficient, because
suffixes might follow).


3. How do we test the cbs well?


Passthrough is the best initial test, by making an h266_metadata bsf (even if 
it has no options).  The existing test coverage in FATE of CBS is primarily 
driven by this - for H.26[45], the input streams there are a subset of the 
H.26[45].1 conformance test streams chosen to cover as much of the header space 
as possible.

I recommend enabling the write support ASAP (by adding the second include of 
cbs_h266_syntax_template.c), because the double build can shake out other 
problems too.
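
For reference, the existing H.264/H.265 CBS does roughly the following in
cbs_h2645.c (heavily abbreviated here, omitting the per-field read/write
macros), and the H.266 code would presumably mirror it with
cbs_h266_syntax_template.c:

#define READ
#define READWRITE read
#define RWContext GetBitContext
#include "cbs_h266_syntax_template.c"
#undef READ
#undef READWRITE
#undef RWContext

#define WRITE
#define READWRITE write
#define RWContext PutBitContext
#include "cbs_h266_syntax_template.c"
#undef WRITE
#undef READWRITE
#undef RWContext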

Thanks,

- Mark

Re: [FFmpeg-devel] [PATCH v2] In order to fine-control referencing schemes in VP9 encoding, there is a need to use VP9E_SET_SVC_REF_FRAME_CONFIG method. This commit provides a way to use the API throu

2020-12-14 Thread Wonkap Jang
On Mon, Dec 7, 2020 at 11:57 PM Wonkap Jang  wrote:

> ---
>  doc/encoders.texi  | 32 +
>  libavcodec/libvpxenc.c | 79 ++
>  2 files changed, 111 insertions(+)
>
> diff --git a/doc/encoders.texi b/doc/encoders.texi
> index 0b1c69e982..aa3a2221b6 100644
> --- a/doc/encoders.texi
> +++ b/doc/encoders.texi
> @@ -2129,6 +2129,38 @@ midpoint is passed in rather than calculated for a
> specific clip or chunk.
>  The valid range is [0, 1]. 0 (default) uses standard VBR.
>  @item enable-tpl @var{boolean}
>  Enable temporal dependency model.
> +@item ref-frame-config
> +Using per-frame metadata, set members of the structure
> @code{vpx_svc_ref_frame_config_t} in @code{vpx/vp8cx.h} to fine-control
> referencing schemes and frame buffer management.
> +@*Use a :-separated list of key=value pairs.
> +For example,
> +@example
> +av_dict_set(&av_frame->metadata, "ref-frame-config", \
>
> +"rfc_update_buffer_slot=7:rfc_lst_fb_idx=0:rfc_gld_fb_idx=1:rfc_alt_fb_idx=2:rfc_reference_last=0:rfc_reference_golden=0:rfc_reference_alt_ref=0");}
> +@end example
> +@table @option
> +@item rfc_update_buffer_slot
> +Indicates the buffer slot number to update
> +@item rfc_update_last
> +Indicates whether to update the LAST frame
> +@item rfc_update_golden
> +Indicates whether to update GOLDEN frame
> +@item rfc_update_alt_ref
> +Indicates whether to update ALT_REF frame
> +@item rfc_lst_fb_idx
> +LAST frame buffer index
> +@item rfc_gld_fb_idx
> +GOLDEN frame buffer index
> +@item rfc_alt_fb_idx
> +ALT_REF frame buffer index
> +@item rfc_reference_last
> +Indicates whether to reference LAST frame
> +@item rfc_reference_golden
> +Indicates whether to reference GOLDEN frame
> +@item rfc_reference_alt_ref
> +Indicates whether to reference ALT_REF frame
> +@item rfc_reference_duration
> +Indicates frame duration
> +@end table
>  @end table
>
>  @end table
> diff --git a/libavcodec/libvpxenc.c b/libavcodec/libvpxenc.c
> index a7c76eb835..345c71cd22 100644
> --- a/libavcodec/libvpxenc.c
> +++ b/libavcodec/libvpxenc.c
> @@ -125,6 +125,11 @@ typedef struct VPxEncoderContext {
>   * encounter a frame with ROI side data.
>   */
>  int roi_warned;
> +
> +#if CONFIG_LIBVPX_VP9_ENCODER && defined
> (VPX_CTRL_VP9E_SET_MAX_INTER_BITRATE_PCT)
> +vpx_svc_ref_frame_config_t ref_frame_config;
> +AVDictionary *vpx_ref_frame_config;
> +#endif
>  } VPxContext;
>
>  /** String mappings for enum vp8e_enc_control_id */
> @@ -152,6 +157,9 @@ static const char *const ctlidstr[] = {
>  [VP9E_SET_SVC_LAYER_ID]= "VP9E_SET_SVC_LAYER_ID",
>  #if VPX_ENCODER_ABI_VERSION >= 12
>  [VP9E_SET_SVC_PARAMETERS]  = "VP9E_SET_SVC_PARAMETERS",
> +#if defined (VPX_CTRL_VP9E_SET_MAX_INTER_BITRATE_PCT)
> +[VP9E_SET_SVC_REF_FRAME_CONFIG]= "VP9E_SET_SVC_REF_FRAME_CONFIG",
> +#endif
>  #endif
>  [VP9E_SET_SVC] = "VP9E_SET_SVC",
>  #if VPX_ENCODER_ABI_VERSION >= 11
> @@ -394,6 +402,18 @@ static void vp8_ts_parse_int_array(int *dest, char
> *value, size_t value_len, int
>  }
>  }
>
> +static void vp8_ts_parse_int64_array(int64_t *dest, char *value, size_t
> value_len, int max_entries)
> +{
> +int dest_idx = 0;
> +char *saveptr = NULL;
> +char *token = av_strtok(value, ",", &saveptr);
> +
> +while (token && dest_idx < max_entries) {
> +dest[dest_idx++] = strtoull(token, NULL, 10);
> +token = av_strtok(NULL, ",", &saveptr);
> +}
> +}
> +
>  static void set_temporal_layer_pattern(int layering_mode,
> vpx_codec_enc_cfg_t *cfg,
> int *layer_flags, int
> *flag_periodicity)
>  {
> @@ -541,6 +561,48 @@ static int vpx_ts_param_parse(VPxContext *ctx, struct
> vpx_codec_enc_cfg *enccfg,
>  return 0;
>  }
>
> +#if CONFIG_LIBVPX_VP9_ENCODER && defined
> (VPX_CTRL_VP9E_SET_MAX_INTER_BITRATE_PCT)
> +static int vpx_ref_frame_config_parse(VPxContext *ctx, const struct
> vpx_codec_enc_cfg *enccfg,
> +  char *key, char *value, enum AVCodecID
> codec_id)
> +{
> +size_t value_len = strlen(value);
> +int ss_number_layers = enccfg->ss_number_layers;
> +vpx_svc_ref_frame_config_t *ref_frame_config = &ctx->ref_frame_config;
> +
> +if (!value_len)
> +return -1;
> +
> +if (codec_id != AV_CODEC_ID_VP9)
> +return -1;
> +
> +if (!strcmp(key, "rfc_update_buffer_slot")) {
> +vp8_ts_parse_int_array(ref_frame_config->update_buffer_slot,
> value, value_len, ss_number_layers);
> +} else if (!strcmp(key, "rfc_update_last")) {
> +vp8_ts_parse_int_array(ref_frame_config->update_last, value,
> value_len, ss_number_layers);
> +} else if (!strcmp(key, "rfc_update_golden")) {
> +vp8_ts_parse_int_array(ref_frame_config->update_golden, value,
> value_len, ss_number_layers);
> +} else if (!strcmp(key, "rfc_update_alt_ref")) {
> +vp8_ts_parse_int_array(ref_frame_config->update_alt_ref, value,
> val

[FFmpeg-devel] [PATCH 8/9] lavfi/vf_codecview: convert to the video_enc_params API

2020-12-14 Thread Anton Khirnov
---
 libavfilter/Makefile   |  2 +-
 libavfilter/vf_codecview.c | 12 ++--
 2 files changed, 11 insertions(+), 3 deletions(-)

diff --git a/libavfilter/Makefile b/libavfilter/Makefile
index 66adcea5f8..7a38a9e1b7 100644
--- a/libavfilter/Makefile
+++ b/libavfilter/Makefile
@@ -193,7 +193,7 @@ OBJS-$(CONFIG_CHROMAKEY_FILTER)  += 
vf_chromakey.o
 OBJS-$(CONFIG_CHROMANR_FILTER)   += vf_chromanr.o
 OBJS-$(CONFIG_CHROMASHIFT_FILTER)+= vf_chromashift.o
 OBJS-$(CONFIG_CIESCOPE_FILTER)   += vf_ciescope.o
-OBJS-$(CONFIG_CODECVIEW_FILTER)  += vf_codecview.o
+OBJS-$(CONFIG_CODECVIEW_FILTER)  += vf_codecview.o qp_table.o
 OBJS-$(CONFIG_COLORBALANCE_FILTER)   += vf_colorbalance.o
 OBJS-$(CONFIG_COLORCHANNELMIXER_FILTER)  += vf_colorchannelmixer.o
 OBJS-$(CONFIG_COLORKEY_FILTER)   += vf_colorkey.o
diff --git a/libavfilter/vf_codecview.c b/libavfilter/vf_codecview.c
index 331bfba777..197dc96136 100644
--- a/libavfilter/vf_codecview.c
+++ b/libavfilter/vf_codecview.c
@@ -33,6 +33,7 @@
 #include "libavutil/motion_vector.h"
 #include "libavutil/opt.h"
 #include "avfilter.h"
+#include "qp_table.h"
 #include "internal.h"
 
 #define MV_P_FOR  (1<<0)
@@ -219,8 +220,14 @@ static int filter_frame(AVFilterLink *inlink, AVFrame 
*frame)
 AVFilterLink *outlink = ctx->outputs[0];
 
 if (s->qp) {
-int qstride, qp_type;
-int8_t *qp_table = av_frame_get_qp_table(frame, &qstride, &qp_type);
+int qstride, qp_type, ret;
+int8_t *qp_table;
+
+ret = ff_qp_table_extract(frame, &qp_table, &qstride, NULL, &qp_type);
+if (ret < 0) {
+av_frame_free(&frame);
+return ret;
+}
 
 if (qp_table) {
 int x, y;
@@ -240,6 +247,7 @@ static int filter_frame(AVFilterLink *inlink, AVFrame 
*frame)
 pv += lzv;
 }
 }
+av_freep(&qp_table);
 }
 
 if (s->mv || s->mv_type) {
-- 
2.28.0


[FFmpeg-devel] [PATCH 9/9] lavfi/vf_uspp: convert to the video_enc_params API

2020-12-14 Thread Anton Khirnov
---
 libavfilter/Makefile  |  2 +-
 libavfilter/vf_uspp.c | 53 ++-
 2 files changed, 23 insertions(+), 32 deletions(-)

diff --git a/libavfilter/Makefile b/libavfilter/Makefile
index 7a38a9e1b7..52d336c3e8 100644
--- a/libavfilter/Makefile
+++ b/libavfilter/Makefile
@@ -448,7 +448,7 @@ OBJS-$(CONFIG_UNSHARP_FILTER)+= vf_unsharp.o
 OBJS-$(CONFIG_UNSHARP_OPENCL_FILTER) += vf_unsharp_opencl.o opencl.o \
 opencl/unsharp.o
 OBJS-$(CONFIG_UNTILE_FILTER) += vf_untile.o
-OBJS-$(CONFIG_USPP_FILTER)   += vf_uspp.o
+OBJS-$(CONFIG_USPP_FILTER)   += vf_uspp.o qp_table.o
 OBJS-$(CONFIG_V360_FILTER)   += vf_v360.o
 OBJS-$(CONFIG_VAGUEDENOISER_FILTER)  += vf_vaguedenoiser.o
 OBJS-$(CONFIG_VECTORSCOPE_FILTER)+= vf_vectorscope.o
diff --git a/libavfilter/vf_uspp.c b/libavfilter/vf_uspp.c
index 6a814350e8..415949bc76 100644
--- a/libavfilter/vf_uspp.c
+++ b/libavfilter/vf_uspp.c
@@ -32,6 +32,7 @@
 #include "libavutil/opt.h"
 #include "libavutil/pixdesc.h"
 #include "internal.h"
+#include "qp_table.h"
 #include "avfilter.h"
 
 #define MAX_LEVEL 8 /* quality levels */
@@ -51,8 +52,8 @@ typedef struct USPPContext {
 AVCodecContext *avctx_enc[BLOCK*BLOCK];
 AVFrame *frame;
 AVFrame *frame_dec;
-uint8_t *non_b_qp_table;
-int non_b_qp_alloc_size;
+int8_t *non_b_qp_table;
+int non_b_qp_stride;
 int use_bframe_qp;
 } USPPContext;
 
@@ -385,45 +386,32 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in)
 AVFrame *out = in;
 
 int qp_stride = 0;
-uint8_t *qp_table = NULL;
+int8_t *qp_table = NULL;
+int ret = 0;
 
 /* if we are not in a constant user quantizer mode and we don't want to use
  * the quantizers from the B-frames (B-frames often have a higher QP), we
  * need to save the qp table from the last non B-frame; this is what the
  * following code block does */
-if (!uspp->qp) {
-qp_table = av_frame_get_qp_table(in, &qp_stride, &uspp->qscale_type);
-
-if (qp_table && !uspp->use_bframe_qp && in->pict_type != 
AV_PICTURE_TYPE_B) {
-int w, h;
-
-/* if the qp stride is not set, it means the QP are only defined on
- * a line basis */
-if (!qp_stride) {
-w = AV_CEIL_RSHIFT(inlink->w, 4);
-h = 1;
-} else {
-w = qp_stride;
-h = AV_CEIL_RSHIFT(inlink->h, 4);
-}
-
-if (w * h > uspp->non_b_qp_alloc_size) {
-int ret = av_reallocp_array(&uspp->non_b_qp_table, w, h);
-if (ret < 0) {
-uspp->non_b_qp_alloc_size = 0;
-return ret;
-}
-uspp->non_b_qp_alloc_size = w * h;
-}
+if (!uspp->qp && (uspp->use_bframe_qp || in->pict_type != 
AV_PICTURE_TYPE_B)) {
+ret = ff_qp_table_extract(in, &qp_table, &qp_stride, NULL, 
&uspp->qscale_type);
+if (ret < 0) {
+av_frame_free(&in);
+return ret;
+}
 
-av_assert0(w * h <= uspp->non_b_qp_alloc_size);
-memcpy(uspp->non_b_qp_table, qp_table, w * h);
+if (!uspp->use_bframe_qp && in->pict_type != AV_PICTURE_TYPE_B) {
+av_freep(&uspp->non_b_qp_table);
+uspp->non_b_qp_table  = qp_table;
+uspp->non_b_qp_stride = qp_stride;
 }
 }
 
 if (uspp->log2_count && !ctx->is_disabled) {
-if (!uspp->use_bframe_qp && uspp->non_b_qp_table)
+if (!uspp->use_bframe_qp && uspp->non_b_qp_table) {
 qp_table = uspp->non_b_qp_table;
+qp_stride = uspp->non_b_qp_stride;
+}
 
 if (qp_table || uspp->qp) {
 
@@ -455,7 +443,10 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in)
 inlink->w, inlink->h);
 av_frame_free(&in);
 }
-return ff_filter_frame(outlink, out);
+ret = ff_filter_frame(outlink, out);
+if (qp_table != uspp->non_b_qp_table)
+av_freep(&qp_table);
+return ret;
 }
 
 static av_cold void uninit(AVFilterContext *ctx)
-- 
2.28.0


[FFmpeg-devel] [PATCH 5/9] lavfi/vf_spp: convert to the video_enc_params API

2020-12-14 Thread Anton Khirnov
Re-enable fate-filter-spp
---
 libavfilter/Makefile|  2 +-
 libavfilter/vf_spp.c| 55 -
 libavfilter/vf_spp.h|  2 +-
 tests/fate/filter-video.mak |  4 +--
 4 files changed, 28 insertions(+), 35 deletions(-)

diff --git a/libavfilter/Makefile b/libavfilter/Makefile
index 936e9bcd5b..9c8d0acaee 100644
--- a/libavfilter/Makefile
+++ b/libavfilter/Makefile
@@ -413,7 +413,7 @@ OBJS-$(CONFIG_SOBEL_FILTER)  += 
vf_convolution.o
 OBJS-$(CONFIG_SOBEL_OPENCL_FILTER)   += vf_convolution_opencl.o 
opencl.o \
 opencl/convolution.o
 OBJS-$(CONFIG_SPLIT_FILTER)  += split.o
-OBJS-$(CONFIG_SPP_FILTER)+= vf_spp.o
+OBJS-$(CONFIG_SPP_FILTER)+= vf_spp.o qp_table.o
 OBJS-$(CONFIG_SR_FILTER) += vf_sr.o
 OBJS-$(CONFIG_SSIM_FILTER)   += vf_ssim.o framesync.o
 OBJS-$(CONFIG_STEREO3D_FILTER)   += vf_stereo3d.o
diff --git a/libavfilter/vf_spp.c b/libavfilter/vf_spp.c
index 4bcc6429e0..4ac86b7419 100644
--- a/libavfilter/vf_spp.c
+++ b/libavfilter/vf_spp.c
@@ -36,6 +36,7 @@
 #include "libavutil/opt.h"
 #include "libavutil/pixdesc.h"
 #include "internal.h"
+#include "qp_table.h"
 #include "vf_spp.h"
 
 enum mode {
@@ -374,47 +375,34 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in)
 AVFilterLink *outlink = ctx->outputs[0];
 AVFrame *out = in;
 int qp_stride = 0;
-const int8_t *qp_table = NULL;
+int8_t *qp_table = NULL;
 const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(inlink->format);
 const int depth = desc->comp[0].depth;
+int ret = 0;
 
 /* if we are not in a constant user quantizer mode and we don't want to use
  * the quantizers from the B-frames (B-frames often have a higher QP), we
  * need to save the qp table from the last non B-frame; this is what the
  * following code block does */
-if (!s->qp) {
-qp_table = av_frame_get_qp_table(in, &qp_stride, &s->qscale_type);
-
-if (qp_table && !s->use_bframe_qp && in->pict_type != 
AV_PICTURE_TYPE_B) {
-int w, h;
-
-/* if the qp stride is not set, it means the QP are only defined on
- * a line basis */
-if (!qp_stride) {
-w = AV_CEIL_RSHIFT(inlink->w, 4);
-h = 1;
-} else {
-w = qp_stride;
-h = AV_CEIL_RSHIFT(inlink->h, 4);
-}
-
-if (w * h > s->non_b_qp_alloc_size) {
-int ret = av_reallocp_array(&s->non_b_qp_table, w, h);
-if (ret < 0) {
-s->non_b_qp_alloc_size = 0;
-return ret;
-}
-s->non_b_qp_alloc_size = w * h;
-}
+if (!s->qp && (s->use_bframe_qp || in->pict_type != AV_PICTURE_TYPE_B)) {
+ret = ff_qp_table_extract(in, &qp_table, &qp_stride, NULL, 
&s->qscale_type);
+if (ret < 0) {
+av_frame_free(&in);
+return ret;
+}
 
-av_assert0(w * h <= s->non_b_qp_alloc_size);
-memcpy(s->non_b_qp_table, qp_table, w * h);
+if (!s->use_bframe_qp && in->pict_type != AV_PICTURE_TYPE_B) {
+av_freep(&s->non_b_qp_table);
+s->non_b_qp_table  = qp_table;
+s->non_b_qp_stride = qp_stride;
 }
 }
 
 if (s->log2_count && !ctx->is_disabled) {
-if (!s->use_bframe_qp && s->non_b_qp_table)
-qp_table = s->non_b_qp_table;
+if (!s->use_bframe_qp && s->non_b_qp_table) {
+qp_table  = s->non_b_qp_table;
+qp_stride = s->non_b_qp_stride;
+}
 
 if (qp_table || s->qp) {
 const int cw = AV_CEIL_RSHIFT(inlink->w, s->hsub);
@@ -429,7 +417,8 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in)
 out = ff_get_video_buffer(outlink, aligned_w, aligned_h);
 if (!out) {
 av_frame_free(&in);
-return AVERROR(ENOMEM);
+ret = AVERROR(ENOMEM);
+goto finish;
 }
 av_frame_copy_props(out, in);
 out->width  = in->width;
@@ -453,7 +442,11 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in)
 inlink->w, inlink->h);
 av_frame_free(&in);
 }
-return ff_filter_frame(outlink, out);
+ret = ff_filter_frame(outlink, out);
+finish:
+if (qp_table != s->non_b_qp_table)
+av_freep(&qp_table);
+return ret;
 }
 
 static int process_command(AVFilterContext *ctx, const char *cmd, const char 
*args,
diff --git a/libavfilter/vf_spp.h b/libavfilter/vf_spp.h
index 879ed40f03..76c0c34ab2 100644
--- a/libavfilter/vf_spp.h
+++ b/libavfilter/vf_spp.h
@@ -39,7 +39,7 @@ typedef struct SPPContext {
 uint16_t *temp;
 AVDCT *dct;
 int8_t

[FFmpeg-devel] [PATCH 6/9] lavfi/vf_pp7: convert to the video_enc_params API

2020-12-14 Thread Anton Khirnov
Re-enable fate-filter-pp7
---
 libavfilter/Makefile|  2 +-
 libavfilter/vf_pp7.c| 14 +++---
 tests/fate/filter-video.mak |  4 ++--
 3 files changed, 14 insertions(+), 6 deletions(-)

diff --git a/libavfilter/Makefile b/libavfilter/Makefile
index 9c8d0acaee..9936fc9711 100644
--- a/libavfilter/Makefile
+++ b/libavfilter/Makefile
@@ -353,7 +353,7 @@ OBJS-$(CONFIG_PHOTOSENSITIVITY_FILTER)   += 
vf_photosensitivity.o
 OBJS-$(CONFIG_PIXDESCTEST_FILTER)+= vf_pixdesctest.o
 OBJS-$(CONFIG_PIXSCOPE_FILTER)   += vf_datascope.o
 OBJS-$(CONFIG_PP_FILTER) += vf_pp.o qp_table.o
-OBJS-$(CONFIG_PP7_FILTER)+= vf_pp7.o
+OBJS-$(CONFIG_PP7_FILTER)+= vf_pp7.o qp_table.o
 OBJS-$(CONFIG_PREMULTIPLY_FILTER)+= vf_premultiply.o framesync.o
 OBJS-$(CONFIG_PREWITT_FILTER)+= vf_convolution.o
 OBJS-$(CONFIG_PREWITT_OPENCL_FILTER) += vf_convolution_opencl.o 
opencl.o \
diff --git a/libavfilter/vf_pp7.c b/libavfilter/vf_pp7.c
index 570a1c90b9..24e3b1dd41 100644
--- a/libavfilter/vf_pp7.c
+++ b/libavfilter/vf_pp7.c
@@ -32,6 +32,7 @@
 #include "libavutil/opt.h"
 #include "libavutil/pixdesc.h"
 #include "internal.h"
+#include "qp_table.h"
 #include "vf_pp7.h"
 
 enum mode {
@@ -322,10 +323,15 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in)
 AVFrame *out = in;
 
 int qp_stride = 0;
-uint8_t *qp_table = NULL;
+int8_t *qp_table = NULL;
 
-if (!pp7->qp)
-qp_table = av_frame_get_qp_table(in, &qp_stride, &pp7->qscale_type);
+if (!pp7->qp) {
+int ret = ff_qp_table_extract(in, &qp_table, &qp_stride, NULL, 
&pp7->qscale_type);
+if (ret < 0) {
+av_frame_free(&in);
+return ret;
+}
+}
 
 if (!ctx->is_disabled) {
 const int cw = AV_CEIL_RSHIFT(inlink->w, pp7->hsub);
@@ -340,6 +346,7 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in)
 out = ff_get_video_buffer(outlink, aligned_w, aligned_h);
 if (!out) {
 av_frame_free(&in);
+av_freep(&qp_table);
 return AVERROR(ENOMEM);
 }
 av_frame_copy_props(out, in);
@@ -366,6 +373,7 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in)
 inlink->w, inlink->h);
 av_frame_free(&in);
 }
+av_freep(&qp_table);
 return ff_filter_frame(outlink, out);
 }
 
diff --git a/tests/fate/filter-video.mak b/tests/fate/filter-video.mak
index 7f6b2d0f15..9c7e3489f4 100644
--- a/tests/fate/filter-video.mak
+++ b/tests/fate/filter-video.mak
@@ -573,9 +573,9 @@ fate-filter-pp4: CMD = video_filter "pp=be/ci"
 fate-filter-pp5: CMD = video_filter "pp=md"
 fate-filter-pp6: CMD = video_filter "pp=be/fd"
 
-#FATE_FILTER_VSYNTH-$(CONFIG_PP7_FILTER) += fate-filter-pp7
+FATE_FILTER_VSYNTH-$(CONFIG_PP7_FILTER) += fate-filter-pp7
 fate-filter-pp7: fate-vsynth1-mpeg4-qprd
-fate-filter-pp7: CMD = framecrc -flags bitexact -idct simple -i 
$(TARGET_PATH)/tests/data/fate/vsynth1-mpeg4-qprd.avi -frames:v 5 -flags 
+bitexact -vf "pp7"
+fate-filter-pp7: CMD = framecrc -flags bitexact -export_side_data venc_params 
-idct simple -i $(TARGET_PATH)/tests/data/fate/vsynth1-mpeg4-qprd.avi -frames:v 
5 -flags +bitexact -vf "pp7"
 
 FATE_FILTER_VSYNTH-$(CONFIG_SPP_FILTER) += fate-filter-spp
 fate-filter-spp: fate-vsynth1-mpeg4-qprd
-- 
2.28.0


[FFmpeg-devel] [PATCH 7/9] lavfi/vf_fspp: convert to the video_enc_params API

2020-12-14 Thread Anton Khirnov
---
 libavfilter/Makefile  |  2 +-
 libavfilter/vf_fspp.c | 59 +--
 libavfilter/vf_fspp.h |  4 +--
 3 files changed, 26 insertions(+), 39 deletions(-)

diff --git a/libavfilter/Makefile b/libavfilter/Makefile
index 9936fc9711..66adcea5f8 100644
--- a/libavfilter/Makefile
+++ b/libavfilter/Makefile
@@ -271,7 +271,7 @@ OBJS-$(CONFIG_FRAMESTEP_FILTER)  += 
vf_framestep.o
 OBJS-$(CONFIG_FREEZEDETECT_FILTER)   += vf_freezedetect.o
 OBJS-$(CONFIG_FREEZEFRAMES_FILTER)   += vf_freezeframes.o
 OBJS-$(CONFIG_FREI0R_FILTER) += vf_frei0r.o
-OBJS-$(CONFIG_FSPP_FILTER)   += vf_fspp.o
+OBJS-$(CONFIG_FSPP_FILTER)   += vf_fspp.o qp_table.o
 OBJS-$(CONFIG_GBLUR_FILTER)  += vf_gblur.o
 OBJS-$(CONFIG_GEQ_FILTER)+= vf_geq.o
 OBJS-$(CONFIG_GRADFUN_FILTER)+= vf_gradfun.o
diff --git a/libavfilter/vf_fspp.c b/libavfilter/vf_fspp.c
index c6989046c4..a10d9b8a7d 100644
--- a/libavfilter/vf_fspp.c
+++ b/libavfilter/vf_fspp.c
@@ -40,6 +40,7 @@
 #include "libavutil/opt.h"
 #include "libavutil/pixdesc.h"
 #include "internal.h"
+#include "qp_table.h"
 #include "vf_fspp.h"
 
 #define OFFSET(x) offsetof(FSPPContext, x)
@@ -525,13 +526,6 @@ static int config_input(AVFilterLink *inlink)
 if (!fspp->temp || !fspp->src)
 return AVERROR(ENOMEM);
 
-if (!fspp->use_bframe_qp && !fspp->qp) {
-fspp->non_b_qp_alloc_size = AV_CEIL_RSHIFT(inlink->w, 4) * 
AV_CEIL_RSHIFT(inlink->h, 4);
-fspp->non_b_qp_table = av_calloc(fspp->non_b_qp_alloc_size, 
sizeof(*fspp->non_b_qp_table));
-if (!fspp->non_b_qp_table)
-return AVERROR(ENOMEM);
-}
-
 fspp->store_slice  = store_slice_c;
 fspp->store_slice2 = store_slice2_c;
 fspp->mul_thrmat   = mul_thrmat_c;
@@ -553,8 +547,9 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in)
 AVFrame *out = in;
 
 int qp_stride = 0;
-uint8_t *qp_table = NULL;
+int8_t *qp_table = NULL;
 int i, bias;
+int ret = 0;
 int custom_threshold_m[64];
 
 bias = (1 << 4) + fspp->strength;
@@ -581,38 +576,25 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in)
  * the quantizers from the B-frames (B-frames often have a higher QP), we
  * need to save the qp table from the last non B-frame; this is what the
  * following code block does */
-if (!fspp->qp) {
-qp_table = av_frame_get_qp_table(in, &qp_stride, &fspp->qscale_type);
-
-if (qp_table && !fspp->use_bframe_qp && in->pict_type != 
AV_PICTURE_TYPE_B) {
-int w, h;
-
-/* if the qp stride is not set, it means the QP are only defined on
- * a line basis */
-   if (!qp_stride) {
-w = AV_CEIL_RSHIFT(inlink->w, 4);
-h = 1;
-} else {
-w = qp_stride;
-h = AV_CEIL_RSHIFT(inlink->h, 4);
-}
-if (w * h > fspp->non_b_qp_alloc_size) {
-int ret = av_reallocp_array(&fspp->non_b_qp_table, w, h);
-if (ret < 0) {
-fspp->non_b_qp_alloc_size = 0;
-return ret;
-}
-fspp->non_b_qp_alloc_size = w * h;
-}
+if (!fspp->qp && (fspp->use_bframe_qp || in->pict_type != 
AV_PICTURE_TYPE_B)) {
+ret = ff_qp_table_extract(in, &qp_table, &qp_stride, NULL, 
&fspp->qscale_type);
+if (ret < 0) {
+av_frame_free(&in);
+return ret;
+}
 
-av_assert0(w * h <= fspp->non_b_qp_alloc_size);
-memcpy(fspp->non_b_qp_table, qp_table, w * h);
+if (!fspp->use_bframe_qp && in->pict_type != AV_PICTURE_TYPE_B) {
+av_freep(&fspp->non_b_qp_table);
+fspp->non_b_qp_table  = qp_table;
+fspp->non_b_qp_stride = qp_stride;
 }
 }
 
 if (fspp->log2_count && !ctx->is_disabled) {
-if (!fspp->use_bframe_qp && fspp->non_b_qp_table)
+if (!fspp->use_bframe_qp && fspp->non_b_qp_table) {
 qp_table = fspp->non_b_qp_table;
+qp_stride = fspp->non_b_qp_stride;
+}
 
 if (qp_table || fspp->qp) {
 const int cw = AV_CEIL_RSHIFT(inlink->w, fspp->hsub);
@@ -627,7 +609,8 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in)
 out = ff_get_video_buffer(outlink, aligned_w, aligned_h);
 if (!out) {
 av_frame_free(&in);
-return AVERROR(ENOMEM);
+ret = AVERROR(ENOMEM);
+goto finish;
 }
 av_frame_copy_props(out, in);
 out->width = in->width;
@@ -651,7 +634,11 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in)
 inlink->w, inlink->h);
 av_frame_free(&in);
 }
-return ff_filter_frame(outlink, out);

[FFmpeg-devel] [PATCH 4/9] lavfi/vf_pp: convert to the video_enc_params API

2020-12-14 Thread Anton Khirnov
Re-enable fate-filter-qp and fate-filter-pp.
---
 libavfilter/Makefile|  2 +-
 libavfilter/vf_pp.c | 19 +++
 tests/fate/filter-video.mak |  6 +++---
 3 files changed, 19 insertions(+), 8 deletions(-)

diff --git a/libavfilter/Makefile b/libavfilter/Makefile
index 77b5d3aa23..936e9bcd5b 100644
--- a/libavfilter/Makefile
+++ b/libavfilter/Makefile
@@ -352,7 +352,7 @@ OBJS-$(CONFIG_PHASE_FILTER)  += vf_phase.o
 OBJS-$(CONFIG_PHOTOSENSITIVITY_FILTER)   += vf_photosensitivity.o
 OBJS-$(CONFIG_PIXDESCTEST_FILTER)+= vf_pixdesctest.o
 OBJS-$(CONFIG_PIXSCOPE_FILTER)   += vf_datascope.o
-OBJS-$(CONFIG_PP_FILTER) += vf_pp.o
+OBJS-$(CONFIG_PP_FILTER) += vf_pp.o qp_table.o
 OBJS-$(CONFIG_PP7_FILTER)+= vf_pp7.o
 OBJS-$(CONFIG_PREMULTIPLY_FILTER)+= vf_premultiply.o framesync.o
 OBJS-$(CONFIG_PREWITT_FILTER)+= vf_convolution.o
diff --git a/libavfilter/vf_pp.c b/libavfilter/vf_pp.c
index 524ef1bb0a..29ab777e01 100644
--- a/libavfilter/vf_pp.c
+++ b/libavfilter/vf_pp.c
@@ -26,7 +26,9 @@
 
 #include "libavutil/avassert.h"
 #include "libavutil/opt.h"
+
 #include "internal.h"
+#include "qp_table.h"
 
 #include "libpostproc/postprocess.h"
 
@@ -126,8 +128,9 @@ static int pp_filter_frame(AVFilterLink *inlink, AVFrame 
*inbuf)
 const int aligned_w = FFALIGN(outlink->w, 8);
 const int aligned_h = FFALIGN(outlink->h, 8);
 AVFrame *outbuf;
-int qstride, qp_type;
-int8_t *qp_table ;
+int qstride = 0;
+int8_t *qp_table = NULL;
+int ret;
 
 outbuf = ff_get_video_buffer(outlink, aligned_w, aligned_h);
 if (!outbuf) {
@@ -137,7 +140,14 @@ static int pp_filter_frame(AVFilterLink *inlink, AVFrame 
*inbuf)
 av_frame_copy_props(outbuf, inbuf);
 outbuf->width  = inbuf->width;
 outbuf->height = inbuf->height;
-qp_table = av_frame_get_qp_table(inbuf, &qstride, &qp_type);
+
+ret = ff_qp_table_extract(inbuf, &qp_table, &qstride, NULL, NULL);
+if (ret < 0) {
+av_frame_free(&inbuf);
+av_frame_free(&outbuf);
+av_freep(&qp_table);
+return ret;
+}
 
 pp_postprocess((const uint8_t **)inbuf->data, inbuf->linesize,
outbuf->data, outbuf->linesize,
@@ -146,9 +156,10 @@ static int pp_filter_frame(AVFilterLink *inlink, AVFrame 
*inbuf)
qstride,
pp->modes[pp->mode_id],
pp->pp_ctx,
-   outbuf->pict_type | (qp_type ? PP_PICT_TYPE_QP2 : 0));
+   outbuf->pict_type | (qp_table ? PP_PICT_TYPE_QP2 : 0));
 
 av_frame_free(&inbuf);
+av_freep(&qp_table);
 return ff_filter_frame(outlink, outbuf);
 }
 
diff --git a/tests/fate/filter-video.mak b/tests/fate/filter-video.mak
index 7f5c07fd24..9c48d65ef7 100644
--- a/tests/fate/filter-video.mak
+++ b/tests/fate/filter-video.mak
@@ -561,11 +561,11 @@ fate-filter-idet: CMD = framecrc -flags bitexact -idct 
simple -i $(SRC) -vf idet
 FATE_FILTER_VSYNTH-$(CONFIG_PAD_FILTER) += fate-filter-pad
 fate-filter-pad: CMD = video_filter "pad=iw*1.5:ih*1.5:iw*0.3:ih*0.2"
 
-#FATE_FILTER_PP = fate-filter-pp fate-filter-pp1 fate-filter-pp2 
fate-filter-pp3 fate-filter-pp4 fate-filter-pp5 fate-filter-pp6
+FATE_FILTER_PP = fate-filter-pp fate-filter-pp1 fate-filter-pp2 
fate-filter-pp3 fate-filter-pp4 fate-filter-pp5 fate-filter-pp6
 FATE_FILTER_VSYNTH-$(CONFIG_PP_FILTER) += $(FATE_FILTER_PP)
 $(FATE_FILTER_PP): fate-vsynth1-mpeg4-qprd
 
-fate-filter-pp:  CMD = framecrc -flags bitexact -idct simple -i 
$(TARGET_PATH)/tests/data/fate/vsynth1-mpeg4-qprd.avi -frames:v 5 -flags 
+bitexact -vf "pp=be/hb/vb/tn/l5/al"
+fate-filter-pp:  CMD = framecrc -flags bitexact -export_side_data venc_params 
-idct simple -i $(TARGET_PATH)/tests/data/fate/vsynth1-mpeg4-qprd.avi -frames:v 
5 -flags +bitexact -vf "pp=be/hb/vb/tn/l5/al"
 fate-filter-pp1: CMD = video_filter "pp=fq|4/be/hb/vb/tn/l5/al"
 fate-filter-pp2: CMD = video_filter "qp=2*(x+y),pp=be/h1/v1/lb"
 fate-filter-pp3: CMD = video_filter "qp=2*(x+y),pp=be/ha|128|7/va/li"
@@ -585,7 +585,7 @@ FATE_FILTER_VSYNTH-$(CONFIG_CODECVIEW_FILTER) += 
fate-filter-codecview
 fate-filter-codecview: fate-vsynth1-mpeg4-qprd
 fate-filter-codecview: CMD = framecrc -flags bitexact -idct simple -flags2 
+export_mvs -i $(TARGET_PATH)/tests/data/fate/vsynth1-mpeg4-qprd.avi -frames:v 
5 -flags +bitexact -vf codecview=mv=pf+bf+bb
 
-#FATE_FILTER_VSYNTH-$(call ALLYES, QP_FILTER PP_FILTER) += fate-filter-qp
+FATE_FILTER_VSYNTH-$(call ALLYES, QP_FILTER PP_FILTER) += fate-filter-qp
 fate-filter-qp: CMD = video_filter "qp=34,pp=be/hb/vb/tn/l5/al"
 
 FATE_FILTER_VSYNTH-$(CONFIG_SELECT_FILTER) += fate-filter-select
-- 
2.28.0


[FFmpeg-devel] [PATCH 2/9] lavfi/vf_qp: convert to the video_enc_params API

2020-12-14 Thread Anton Khirnov
Temporarily disable fate-filter-qp until vf_pp is converted.
---
 libavfilter/vf_qp.c | 65 -
 tests/fate/filter-video.mak |  8 ++---
 2 files changed, 47 insertions(+), 26 deletions(-)

diff --git a/libavfilter/vf_qp.c b/libavfilter/vf_qp.c
index 33d39493bc..306e8e4594 100644
--- a/libavfilter/vf_qp.c
+++ b/libavfilter/vf_qp.c
@@ -23,6 +23,8 @@
 #include "libavutil/imgutils.h"
 #include "libavutil/pixdesc.h"
 #include "libavutil/opt.h"
+#include "libavutil/video_enc_params.h"
+
 #include "avfilter.h"
 #include "formats.h"
 #include "internal.h"
@@ -89,38 +91,59 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in)
 AVFilterContext *ctx = inlink->dst;
 AVFilterLink *outlink = ctx->outputs[0];
 QPContext *s = ctx->priv;
-AVBufferRef *out_qp_table_buf;
 AVFrame *out = NULL;
-const int8_t *in_qp_table;
-int type, stride, ret;
+int ret;
+
+AVFrameSideData *sd_in;
+AVVideoEncParams *par_in = NULL;
+int8_t in_qp_global = 0;
+
+AVVideoEncParams *par_out;
 
 if (!s->qp_expr_str || ctx->is_disabled)
 return ff_filter_frame(outlink, in);
 
-out_qp_table_buf = av_buffer_alloc(s->h * s->qstride);
-if (!out_qp_table_buf) {
-ret = AVERROR(ENOMEM);
-goto fail;
+sd_in = av_frame_get_side_data(in, AV_FRAME_DATA_VIDEO_ENC_PARAMS);
+if (sd_in && sd_in->size >= sizeof(AVVideoEncParams)) {
+par_in = (AVVideoEncParams*)sd_in->data;
+
+// we accept the input QP table only if it is of the MPEG2 type
+// and contains either no blocks at all or 16x16 macroblocks
+if (par_in->type == AV_VIDEO_ENC_PARAMS_MPEG2 &&
+(par_in->nb_blocks == s->h * s->qstride || !par_in->nb_blocks)) {
+in_qp_global = par_in->qp;
+if (!par_in->nb_blocks)
+par_in = NULL;
+} else
+par_in = NULL;
 }
 
 out = av_frame_clone(in);
 if (!out) {
-av_buffer_unref(&out_qp_table_buf);
 ret = AVERROR(ENOMEM);
 goto fail;
 }
 
-in_qp_table = av_frame_get_qp_table(in, &stride, &type);
-av_frame_set_qp_table(out, out_qp_table_buf, s->qstride, type);
+par_out = av_video_enc_params_create_side_data(out, 
AV_VIDEO_ENC_PARAMS_MPEG2,
+   (s->evaluate_per_mb || 
sd_in) ?
+   s->h * s->qstride : 0);
+if (!par_out) {
+ret = AVERROR(ENOMEM);
+goto fail;
+}
 
+#define BLOCK_QP_DELTA(block_idx) \
+(par_in ? av_video_enc_params_block(par_in, block_idx)->delta_qp : 0)
 
 if (s->evaluate_per_mb) {
 int y, x;
 
 for (y = 0; y < s->h; y++)
 for (x = 0; x < s->qstride; x++) {
-int qp = in_qp_table ? in_qp_table[x + stride * y] : NAN;
-double var_values[] = { !!in_qp_table, qp, x, y, s->qstride, 
s->h, 0};
+unsigned int block_idx = y * s->qstride + x;
+AVVideoBlockParams *b = av_video_enc_params_block(par_out, 
block_idx);
+int qp = sd_in ? in_qp_global + BLOCK_QP_DELTA(block_idx) : 
NAN;
+double var_values[] = { !!sd_in, qp, x, y, s->qstride, s->h, 
0};
 static const char *var_names[] = { "known", "qp", "x", "y", 
"w", "h", NULL };
 double temp_val;
 
@@ -129,21 +152,19 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in)
 NULL, NULL, NULL, NULL, 0, 0, ctx);
 if (ret < 0)
 goto fail;
-out_qp_table_buf->data[x + s->qstride * y] = lrintf(temp_val);
+b->delta_qp = lrintf(temp_val);
 }
-} else if (in_qp_table) {
+} else if (sd_in) {
 int y, x;
 
 for (y = 0; y < s->h; y++)
-for (x = 0; x < s->qstride; x++)
-out_qp_table_buf->data[x + s->qstride * y] = s->lut[129 +
-((int8_t)in_qp_table[x + stride * y])];
+for (x = 0; x < s->qstride; x++) {
+unsigned int block_idx = y * s->qstride + x;
+AVVideoBlockParams *b = av_video_enc_params_block(par_out, 
block_idx);
+b->delta_qp = s->lut[129 + (int8_t)(in_qp_global + 
BLOCK_QP_DELTA(block_idx))];
+}
 } else {
-int y, x, qp = s->lut[0];
-
-for (y = 0; y < s->h; y++)
-for (x = 0; x < s->qstride; x++)
-out_qp_table_buf->data[x + s->qstride * y] = qp;
+par_out->qp = s->lut[0];
 }
 
 ret = ff_filter_frame(outlink, out);
diff --git a/tests/fate/filter-video.mak b/tests/fate/filter-video.mak
index 34cfc38aba..7f5c07fd24 100644
--- a/tests/fate/filter-video.mak
+++ b/tests/fate/filter-video.mak
@@ -567,8 +567,8 @@ $(FATE_FILTER_PP): fate-vsynth1-mpeg4-qprd
 
fate-filter-pp:  CMD = framecrc -flags bitexact -idct simple -i $(TARGET_PATH)/tests/data/fate/

[FFmpeg-devel] [PATCH 3/9] lavfi: add common code to handle QP tables

2020-12-14 Thread Anton Khirnov
It will be used for converting the *pp filters to the new
AVVideoEncParams API.
---
 libavfilter/qp_table.c | 72 ++
 libavfilter/qp_table.h | 33 +++
 2 files changed, 105 insertions(+)
 create mode 100644 libavfilter/qp_table.c
 create mode 100644 libavfilter/qp_table.h

diff --git a/libavfilter/qp_table.c b/libavfilter/qp_table.c
new file mode 100644
index 00..33812b708d
--- /dev/null
+++ b/libavfilter/qp_table.c
@@ -0,0 +1,72 @@
+/*
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with FFmpeg; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+ */
+
+#include <stdint.h>
+
+// for FF_QSCALE_TYPE_*
+#include "libavcodec/internal.h"
+
+#include "libavutil/frame.h"
+#include "libavutil/mem.h"
+#include "libavutil/video_enc_params.h"
+
+#include "qp_table.h"
+
+int ff_qp_table_extract(AVFrame *frame, int8_t **table, int *table_w, int *table_h,
+int *qscale_type)
+{
+AVFrameSideData *sd;
+AVVideoEncParams *par;
+unsigned int mb_h = (frame->height + 15) / 16;
+unsigned int mb_w = (frame->width + 15) / 16;
+unsigned int nb_mb = mb_h * mb_w;
+unsigned int block_idx;
+
+*table = NULL;
+
+sd = av_frame_get_side_data(frame, AV_FRAME_DATA_VIDEO_ENC_PARAMS);
+if (!sd)
+return 0;
+par = (AVVideoEncParams*)sd->data;
+if (par->type != AV_VIDEO_ENC_PARAMS_MPEG2 ||
+(par->nb_blocks != 0 && par->nb_blocks != nb_mb))
+return AVERROR(ENOSYS);
+
+*table = av_malloc(nb_mb);
+if (!*table)
+return AVERROR(ENOMEM);
+if (table_w)
+*table_w = mb_w;
+if (table_h)
+*table_h = mb_h;
+if (qscale_type)
+*qscale_type = FF_QSCALE_TYPE_MPEG2;
+
+if (par->nb_blocks == 0) {
+memset(*table, par->qp, nb_mb);
+return 0;
+}
+
+for (block_idx = 0; block_idx < nb_mb; block_idx++) {
+AVVideoBlockParams *b = av_video_enc_params_block(par, block_idx);
+(*table)[block_idx] = par->qp + b->delta_qp;
+}
+
+return 0;
+}
+
diff --git a/libavfilter/qp_table.h b/libavfilter/qp_table.h
new file mode 100644
index 00..a552fe2e64
--- /dev/null
+++ b/libavfilter/qp_table.h
@@ -0,0 +1,33 @@
+/*
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with FFmpeg; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+ */
+
+#ifndef AVFILTER_QP_TABLE_H
+#define AVFILTER_QP_TABLE_H
+
+#include <stdint.h>
+
+#include "libavutil/frame.h"
+
+/**
+ * Extract a libpostproc-compatible QP table - an 8-bit QP value per 16x16
+ * macroblock, stored in raster order - from AVVideoEncParams side data.
+ */
+int ff_qp_table_extract(AVFrame *frame, int8_t **table, int *table_w, int *table_h,
+int *qscale_type);
+
+#endif // AVFILTER_QP_TABLE_H
-- 
2.28.0
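
For readers following the conversion: a minimal sketch (not taken from the
patch) of how a *pp-style filter's filter_frame() might consume
ff_qp_table_extract() as declared above; the average-QP logging is purely
illustrative.

#include "libavutil/frame.h"
#include "libavutil/mem.h"
#include "avfilter.h"
#include "internal.h"
#include "qp_table.h"

static int filter_frame(AVFilterLink *inlink, AVFrame *in)
{
    AVFilterContext *ctx = inlink->dst;
    int8_t *qp_table = NULL;
    int qp_w = 0, qp_h = 0, qscale_type = 0;
    int ret = ff_qp_table_extract(in, &qp_table, &qp_w, &qp_h, &qscale_type);

    if (ret < 0) {
        av_frame_free(&in);
        return ret;
    }

    if (qp_table) {
        /* one int8_t QP per 16x16 macroblock, qp_w per row, qp_h rows */
        int64_t sum = 0;
        for (int i = 0; i < qp_w * qp_h; i++)
            sum += qp_table[i];
        av_log(ctx, AV_LOG_DEBUG, "average macroblock QP: %g\n",
               (double)sum / (qp_w * qp_h));
        av_freep(&qp_table);
    }

    return ff_filter_frame(ctx->outputs[0], in);
}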

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

[FFmpeg-devel] [PATCH v2 1/9] mpegvideo: use the AVVideoEncParams API for exporting QP tables

2020-12-14 Thread Anton Khirnov
Do it only when requested with the AV_CODEC_EXPORT_DATA_VIDEO_ENC_PARAMS
flag.

Drop previous code using the long-deprecated AV_FRAME_DATA_QP_TABLE*
API. Temporarily disable fate-filter-pp, fate-filter-pp7,
fate-filter-spp. They will be reenabled once these filters are converted
in following commits.
---
Applied Michael's comments to the set.
---
 doc/APIchanges   |  3 +++
 libavcodec/h263dec.c |  2 ++
 libavcodec/mpeg12dec.c   |  1 +
 libavcodec/mpegpicture.c |  2 ++
 libavcodec/mpegpicture.h |  1 +
 libavcodec/mpegvideo.c   | 35 ---
 libavcodec/rv34.c|  1 +
 libavutil/video_enc_params.h |  8 
 tests/fate/filter-video.mak  |  6 +++---
 9 files changed, 49 insertions(+), 10 deletions(-)

diff --git a/doc/APIchanges b/doc/APIchanges
index 3fb9e12525..054cdd67b9 100644
--- a/doc/APIchanges
+++ b/doc/APIchanges
@@ -15,6 +15,9 @@ libavutil: 2017-10-21
 
 API changes, most recent first:
 
+2020-xx-xx - xx - lavu 56.62.100 - video_enc_params.h
+  Add AV_VIDEO_ENC_PARAMS_MPEG2
+
 2020-12-03 - xx - lavu 56.62.100 - timecode.h
   Add av_timecode_init_from_components.
 
diff --git a/libavcodec/h263dec.c b/libavcodec/h263dec.c
index 32e26a57de..eb8b21e5ed 100644
--- a/libavcodec/h263dec.c
+++ b/libavcodec/h263dec.c
@@ -28,6 +28,8 @@
 #define UNCHECKED_BITSTREAM_READER 1
 
 #include "libavutil/cpu.h"
+#include "libavutil/video_enc_params.h"
+
 #include "avcodec.h"
 #include "error_resilience.h"
 #include "flv.h"
diff --git a/libavcodec/mpeg12dec.c b/libavcodec/mpeg12dec.c
index 6d0e9fc7ed..87c6aacbeb 100644
--- a/libavcodec/mpeg12dec.c
+++ b/libavcodec/mpeg12dec.c
@@ -32,6 +32,7 @@
 #include "libavutil/imgutils.h"
 #include "libavutil/internal.h"
 #include "libavutil/stereo3d.h"
+#include "libavutil/video_enc_params.h"
 
 #include "avcodec.h"
 #include "bytestream.h"
diff --git a/libavcodec/mpegpicture.c b/libavcodec/mpegpicture.c
index 13c11ec492..e495e315e6 100644
--- a/libavcodec/mpegpicture.c
+++ b/libavcodec/mpegpicture.c
@@ -220,6 +220,7 @@ static int alloc_picture_tables(AVCodecContext *avctx, Picture *pic, int encodin
 
 pic->alloc_mb_width  = mb_width;
 pic->alloc_mb_height = mb_height;
+pic->alloc_mb_stride = mb_stride;
 
 return 0;
 }
@@ -346,6 +347,7 @@ int ff_update_picture_tables(Picture *dst, Picture *src)
 
 dst->alloc_mb_width  = src->alloc_mb_width;
 dst->alloc_mb_height = src->alloc_mb_height;
+dst->alloc_mb_stride = src->alloc_mb_stride;
 
 return 0;
 }
diff --git a/libavcodec/mpegpicture.h b/libavcodec/mpegpicture.h
index 2db3d6733a..4bcd666797 100644
--- a/libavcodec/mpegpicture.h
+++ b/libavcodec/mpegpicture.h
@@ -69,6 +69,7 @@ typedef struct Picture {
 
 int alloc_mb_width; ///< mb_width used to allocate tables
 int alloc_mb_height;///< mb_height used to allocate tables
+int alloc_mb_stride;///< mb_stride used to allocate tables
 
 AVBufferRef *mb_mean_buf;
 uint8_t *mb_mean;   ///< Table for MB luminance
diff --git a/libavcodec/mpegvideo.c b/libavcodec/mpegvideo.c
index c28d1adef7..8cc21920e6 100644
--- a/libavcodec/mpegvideo.c
+++ b/libavcodec/mpegvideo.c
@@ -32,6 +32,8 @@
 #include "libavutil/imgutils.h"
 #include "libavutil/internal.h"
 #include "libavutil/motion_vector.h"
+#include "libavutil/video_enc_params.h"
+
 #include "avcodec.h"
 #include "blockdsp.h"
 #include "h264chroma.h"
@@ -1425,14 +1427,33 @@ void ff_print_debug_info(MpegEncContext *s, Picture *p, AVFrame *pict)
 
int ff_mpv_export_qp_table(MpegEncContext *s, AVFrame *f, Picture *p, int qp_type)
 {
-AVBufferRef *ref = av_buffer_ref(p->qscale_table_buf);
-int offset = 2*s->mb_stride + 1;
-if(!ref)
+AVVideoEncParams *par;
+int mult = (qp_type == FF_QSCALE_TYPE_MPEG1) ? 2 : 1;
+unsigned int nb_mb = p->alloc_mb_height * p->alloc_mb_width;
+unsigned int x, y;
+
+if (!(s->avctx->export_side_data & AV_CODEC_EXPORT_DATA_VIDEO_ENC_PARAMS))
+return 0;
+
+par = av_video_enc_params_create_side_data(f, AV_VIDEO_ENC_PARAMS_MPEG2, nb_mb);
+if (!par)
 return AVERROR(ENOMEM);
-av_assert0(ref->size >= offset + s->mb_stride * ((f->height+15)/16));
-ref->size -= offset;
-ref->data += offset;
-return av_frame_set_qp_table(f, ref, s->mb_stride, qp_type);
+
+for (y = 0; y < p->alloc_mb_height; y++)
+for (x = 0; x < p->alloc_mb_width; x++) {
+const unsigned int block_idx = y * p->alloc_mb_width + x;
+const unsigned int mb_xy = y * p->alloc_mb_stride + x;
+AVVideoBlockParams *b = av_video_enc_params_block(par, block_idx);
+
+b->src_x = x * 16;
+b->src_y = y * 16;
+b->w = 16;
+b->h = 16;
+
+b->delta_qp = p->qscale_table[mb_xy] * mult;
+}
+
+return 0;
 }
 
 static inline int hpel_motion_lowres(MpegEncContext *s,
diff --git a/libavcodec/rv34.c b/libavcodec/rv3
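
For context, a minimal sketch (not part of the patch) of how a library user
could request and then read the exported parameters with the public API used
above; error handling around the decode loop itself is omitted.

#include "libavcodec/avcodec.h"
#include "libavutil/frame.h"
#include "libavutil/video_enc_params.h"

/* Before avcodec_open2(): ask the decoder to export the data. */
static void request_enc_params(AVCodecContext *dec_ctx)
{
    dec_ctx->export_side_data |= AV_CODEC_EXPORT_DATA_VIDEO_ENC_PARAMS;
}

/* After a successful avcodec_receive_frame(): walk the per-block QPs. */
static void read_enc_params(const AVFrame *frame)
{
    AVFrameSideData *sd = av_frame_get_side_data(frame, AV_FRAME_DATA_VIDEO_ENC_PARAMS);
    AVVideoEncParams *par;

    if (!sd)
        return;
    par = (AVVideoEncParams *)sd->data;

    for (unsigned int i = 0; i < par->nb_blocks; i++) {
        AVVideoBlockParams *b = av_video_enc_params_block(par, i);
        int mb_qp = par->qp + b->delta_qp; /* per-macroblock QP in this scheme */
        (void)mb_qp;
    }
}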

Re: [FFmpeg-devel] [PATCH 1/6] fate: Add dpx-probe test

2020-12-14 Thread Harry Mallon


> On 10 Dec 2020, at 23:41, Paul B Mahol  wrote:
> 
> I strongly disagree; make use of the money the project got.
> Limiting the size of samples is not going to be productive at all.
> 
> On Fri, Dec 11, 2020 at 12:37 AM Carl Eugen Hoyos wrote:
> 
>> On Thu, Dec 10, 2020 at 13:22, Paul B Mahol wrote:
>>> 
>>> I already uploaded the other file to servers.
>> 
>> We can still remove it.
>> 
>> Downloading the fate suite takes very long and it will get bigger no
>> matter the year.

Is there anything I can do to unblock this? I am very happy for the second,
smaller size/resolution sample to be used instead.

Harry
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] Call for maintainers: vf_uspp, vf_mcdeint

2020-12-14 Thread Anton Khirnov
Quoting Michael Niedermayer (2020-12-14 00:52:06)
> On Sun, Dec 13, 2020 at 06:22:08PM +0100, Anton Khirnov wrote:
> > Quoting Michael Niedermayer (2020-12-13 15:03:19)
> > > On Sun, Dec 13, 2020 at 02:02:33PM +0100, Anton Khirnov wrote:
> > > > Quoting Paul B Mahol (2020-12-13 13:40:15)
> > > > > Why? Is it so hard to fix them work with latest API?
> > > > 
> > > > It is not exactly obvious, since coded_frame is gone. I suppose you
> > > > could instantiate an encoder and a decoder to work around that, but it
> > > > all seems terribly inefficient. Lavfi seems to have some ME code, so
> > > > perhaps that could be used for mcdeint. Or if not, maybe someone could
> > > > get motivated to port something from avisynth or vapoursynth. Similarly
> > > > for uspp, surely one can do a snow-like blur without requiring a whole
> > > > encoder.
> > > > 
> > > > In any case, seems to me like a good opportunity to find out whether
> > > > anyone cares enough about those filters to keep them alive. I don't
> > > > think we should keep code that nobody is willing to maintain.
> > > 
> > > I might do the minimal changes needed to keep these working when I
> > > find the time and if no one else does. Certainly I would not be sad
> > > if someone else did it before me ;)
> > > 
> > > Also, if a redesign happens, what looks interesting to me would be to
> > > be able to export the needed information from encoders.
> > > Factorizing code out of one specific encoder so that only it can be used
> > > is less general, but could be done too, of course.
> > > 
> > > If OTOH encoders in general could export their internal buffers for
> > > filters or debugging, that seems more interesting.
> > 
> > TBH I am very skeptical that this can be done in a clean and
> > maintainable way. 
> 
> why?
> One could simply attach the decoded frame bitmap as side data to the
> packet. This seems, on the surface at least, to not really require anything
> anywhere else. It's just like any other side data, just that it
> would be done only when requested by the user.
> I imagine this might be little more than a single call in an encoder
> with the AVFrame and AVPacket as arguments ...
> 
> 
> > Splitting off individual pieces and making them
> > reusable is a better approach.
> 
> Better for these 2 specific filters, yes, but that also makes it harder
> to change them to a different encoder or even different encoder settings.
> 
> As the filters are currently, it would be reasonably easy to change them to
> a different encoder, experiment around with them, and things like that.

I am not convinced that passing video through an entire encoder is a
meaningful filtering method, if one wants specific and well-defined
results. Not to mention it will most likely be incredibly slow.

-- 
Anton Khirnov
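
To make the mechanism being debated concrete: a rough sketch of what such a
single call inside an encoder might look like. The dedicated side-data type it
would need does not exist today, so it is passed in as a parameter here; this
is an illustration of the idea, not a proposed API.

#include "libavcodec/avcodec.h"
#include "libavutil/imgutils.h"

/* Copy the encoder's reconstructed frame into packet side data, so that
 * downstream filters or debugging tools could read it. "type" stands in for
 * a hypothetical new AVPacketSideDataType entry. */
static int attach_recon_frame(const AVFrame *recon, AVPacket *pkt,
                              enum AVPacketSideDataType type)
{
    int size = av_image_get_buffer_size(recon->format, recon->width,
                                        recon->height, 1);
    uint8_t *buf;
    int ret;

    if (size < 0)
        return size;

    buf = av_packet_new_side_data(pkt, type, size);
    if (!buf)
        return AVERROR(ENOMEM);

    ret = av_image_copy_to_buffer(buf, size,
                                  (const uint8_t * const *)recon->data,
                                  recon->linesize, recon->format,
                                  recon->width, recon->height, 1);
    return ret < 0 ? ret : 0;
}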
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH] avfilter: add stereoupmix

2020-12-14 Thread Paul B Mahol
On Mon, Dec 14, 2020 at 5:17 AM Lingjiang Fang wrote:

> On Wed,  9 Dec 2020 18:20:07 +0100
> Paul B Mahol  wrote:
>
> >Signed-off-by: Paul B Mahol 
> >---
> > doc/filters.texi |  34 
> > libavfilter/Makefile |   1 +
> > libavfilter/af_stereoupmix.c | 352 +++
> > libavfilter/allfilters.c |   1 +
> > 4 files changed, 388 insertions(+)
> > create mode 100644 libavfilter/af_stereoupmix.c
> >
> >diff --git a/doc/filters.texi b/doc/filters.texi
> >index 9dfe95f40d..325753c8f4 100644
> >--- a/doc/filters.texi
> >+++ b/doc/filters.texi
> >@@ -5817,6 +5817,40 @@ Convert M/S signal to L/R:
> > @end example
> > @end itemize
> >
> >+@section stereoupmix
> >+Upmix stereo audio.
> >+
> >+This filter upmixes stereo audio using adaptive panning method.
>
> As far as I know, we have a filter, surround, that has a similar function;
> can you describe the difference between these two filters?
>
> sorry if I asked a stupid question
>

This one is a zero-latency filter, is more than 10x faster,
and works completely in the time domain.

With the surround filter, you only get better output with the overlap option
set to a higher value (>= 0.875).


>
> >+
> >+The filter accepts the following options:
> >+
> >+@table @option
> >+@item upmix
> >+Set the upmix mode. Can be one of the following:
> >+@table @samp
> >+@item 2.1
> >+@item 3.0
> >+@item 3.1
> >+@item 4.0
> >+@item 4.1
> >+@item 5.0
> >+@item 5.1
> >+@end table
> >+Default value is @var{5.1}.
> >+
> >+@item center
> >+Set the center audio strength. Allowed range is from 0.0 to 1.0.
> >+Default value is 0.5.
> >+
> >+@item ambience
> >+Set the ambience audio strength. Allowed range is from 0.0 to 1.0.
> >+Default value is 0.5.
> >+@end table
> >+
> >+@subsection Commands
> >+
> >+This filter supports the all above options except @code{upmix} as @ref{commands}.
> >+
> > @section stereowiden
> >
> > This filter enhance the stereo effect by suppressing signal common to
> > both
> >diff --git a/libavfilter/Makefile b/libavfilter/Makefile
> >index 1af85a71a0..a9d76a1eaf 100644
> >--- a/libavfilter/Makefile
> >+++ b/libavfilter/Makefile
> >@@ -145,6 +145,7 @@ OBJS-$(CONFIG_SILENCEREMOVE_FILTER)  += af_silenceremove.o
> > OBJS-$(CONFIG_SOFALIZER_FILTER)  += af_sofalizer.o
> > OBJS-$(CONFIG_SPEECHNORM_FILTER) += af_speechnorm.o
> > OBJS-$(CONFIG_STEREOTOOLS_FILTER)+= af_stereotools.o
> >+OBJS-$(CONFIG_STEREOUPMIX_FILTER)+= af_stereoupmix.o
> > OBJS-$(CONFIG_STEREOWIDEN_FILTER)+= af_stereowiden.o
> > OBJS-$(CONFIG_SUPEREQUALIZER_FILTER) += af_superequalizer.o
> > OBJS-$(CONFIG_SURROUND_FILTER)   += af_surround.o
> >diff --git a/libavfilter/af_stereoupmix.c b/libavfilter/af_stereoupmix.c
> >new file mode 100644
> >index 00..813f21b088
> >--- /dev/null
> >+++ b/libavfilter/af_stereoupmix.c
> >@@ -0,0 +1,352 @@
> >+/*
> >+ * Copyright (c) 2020 Paul B Mahol
> >+ *
> >+ * This file is part of FFmpeg.
> >+ *
> >+ * FFmpeg is free software; you can redistribute it and/or
> >+ * modify it under the terms of the GNU Lesser General Public
> >+ * License as published by the Free Software Foundation; either
> >+ * version 2.1 of the License, or (at your option) any later version.
> >+ *
> >+ * FFmpeg is distributed in the hope that it will be useful,
> >+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
> >+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> >+ * Lesser General Public License for more details.
> >+ *
> >+ * You should have received a copy of the GNU Lesser General Public
> >+ * License along with FFmpeg; if not, write to the Free Software
> >+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
> >+ */
> >+
> >+#include "libavutil/avassert.h"
> >+#include "libavutil/channel_layout.h"
> >+#include "libavutil/opt.h"
> >+#include "avfilter.h"
> >+#include "audio.h"
> >+#include "formats.h"
> >+
> >+enum UpmixMode {
> >+UPMIX_2_1,
> >+UPMIX_3_0,
> >+UPMIX_3_1,
> >+UPMIX_4_0,
> >+UPMIX_4_1,
> >+UPMIX_5_0,
> >+UPMIX_5_1,
> >+NB_UPMIX
> >+};
> >+
> >+typedef struct StereoUpmixContext {
> >+const AVClass *class;
> >+
> >+int upmix;
> >+float center;
> >+float ambience;
> >+
> >+uint64_t out_layout;
> >+
> >+float fl, fr;
> >+float y;
> >+float pk;
> >+float wl, wr;
> >+
> >+float a[2];
> >+float b[3];
> >+float z[2];
> >+} StereoUpmixContext;
> >+
> >+#define OFFSET(x) offsetof(StereoUpmixContext, x)
> >+#define FLAGS AV_OPT_FLAG_AUDIO_PARAM|AV_OPT_FLAG_FILTERING_PARAM
> >+#define TFLAGS AV_OPT_FLAG_AUDIO_PARAM|AV_OPT_FLAG_FILTERING_PARAM|AV_OPT_FLAG_RUNTIME_PARAM
> >+
> >+static const AVOption stereoupmix_options[] = {
> >+{ "upmix",  "set upmix mode",  OFFSET(upmix), AV_OPT_TYPE_INT,   {.i64=UPMIX_5_1}, 0, NB_UPMIX-1, FLAGS, "upmix" },
> >+{ "2.1",NULL,  0,
> >AV_OPT_TYPE

Re: [FFmpeg-devel] [PATCH v2] fate/hevc-conformance: add clip for persistent_rice_adaptation_enabled_flag

2020-12-14 Thread Guangxin Xu
Hi Linjie,
thanks for the review.
The stream has the feature, but it is not used.

The decoded YUV's md5 is 3c94b5ebc0aed0abae4e619b9dcca9cc;
it matches the one in WPP_HIGH_TP_444_8BIT_RExt_Apple_2.md5.

thanks

On Thu, Dec 10, 2020 at 6:14 PM Linjie Fu wrote:

> Hi Guangxin,
>
> On Sun, Nov 15, 2020 at 11:07 AM Xu Guangxin  wrote:
> >
> > you can download it from:
> >
> https://www.itu.int/wftp3/av-arch/jctvc-site/bitstream_exchange/draft_conformance/RExt/WPP_HIGH_TP_444_8BIT_RExt_Apple_2.zip
> >
> > Signed-off-by: Xu Guangxin 
> > ---
> >  tests/fate/hevc.mak   | 1 +
> >  .../hevc-conformance-WPP_HIGH_TP_444_8BIT_RExt_Apple_2| 8 
> >  2 files changed, 9 insertions(+)
> >  create mode 100644
> tests/ref/fate/hevc-conformance-WPP_HIGH_TP_444_8BIT_RExt_Apple_2
> >
> > diff --git a/tests/fate/hevc.mak b/tests/fate/hevc.mak
> > index 9a32a7d74c..97edb49781 100644
> > --- a/tests/fate/hevc.mak
> > +++ b/tests/fate/hevc.mak
> > @@ -141,6 +141,7 @@ HEVC_SAMPLES =  \
> >  WPP_D_ericsson_MAIN_2   \
> >  WPP_E_ericsson_MAIN_2   \
> >  WPP_F_ericsson_MAIN_2   \
> > +WPP_HIGH_TP_444_8BIT_RExt_Apple_2 \
> >
> >  HEVC_SAMPLES_10BIT =\
> >  DBLK_A_MAIN10_VIXS_3\
> > diff --git
> a/tests/ref/fate/hevc-conformance-WPP_HIGH_TP_444_8BIT_RExt_Apple_2
> b/tests/ref/fate/hevc-conformance-WPP_HIGH_TP_444_8BIT_RExt_Apple_2
> > new file mode 100644
> > index 00..fcb1d2894a
> > --- /dev/null
> > +++ b/tests/ref/fate/hevc-conformance-WPP_HIGH_TP_444_8BIT_RExt_Apple_2
> > @@ -0,0 +1,8 @@
> > +#tb 0: 1/25
> > +#media_type 0: video
> > +#codec_id 0: rawvideo
> > +#dimensions 0: 1024x768
> > +#sar 0: 0/1
> > +0,  0,  0,1,  1179648, 0x78e55a69
> > +0,  1,  1,1,  1179648, 0x5babb3cb
> > +0,  2,  2,1,  1179648, 0x65935648
> > --
> > 2.17.1
> >
>
> For this sample, the native hevc decoder doesn't support "High Throughput
> 4:4:4" (profile_idc = 5) yet:
>
> $ ffmpeg -i fate-suite/hevc-conformance/WPP_HIGH_TP_444_8BIT_RExt_Apple_2.bit
> [hevc @ 0x7fcf04818800] Unknown HEVC profile: 5
> [hevc @ 0x7fcf04818800] high_precision_offsets_enabled_flag not yet
> implemented
> [hevc @ 0x7fcf04818800] Unknown HEVC profile: 5
> [hevc @ 0x7fcf04818800] high_precision_offsets_enabled_flag not yet
> implemented
> [hevc @ 0x7fcf04818800] Unknown HEVC profile: 5
> [hevc @ 0x7fcf04818800] high_precision_offsets_enabled_flag not yet
> implemented
>
> Hence the md5 result seems to be different from the reference in
> WPP_HIGH_TP_444_8BIT_RExt_Apple_2.md5
>
> - linjie
>
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".