Re: [FFmpeg-devel] [PATCH] lavc/vvc: Use pps->{width, height} over sps->{width, height}
On Thu, Feb 15, 2024 at 7:54 PM wrote: > From: Frank Plowman > > The PPS should be used instead of the SPS to get the current picture's > dimensions. Using the SPS can cause issues if the resolution changes > mid-sequence. In particular, it was leading to invalid memory accesses > if the resolution decreased. > > Patch replaces sps->{width,height} with pps->{width,height}. It also > removes sps->{width,height}, as these are no longer used anywhere. > > Fixes crash when decoding DVB V&V test sequence > VVC_HDR_UHDTV1_ClosedGOP_Max3840x2160_50fps_HLG10_res_change_without_RPR > > Signed-off-by: Frank Plowman > > 👍, Thank you, Frank. applied
Re: [FFmpeg-devel] [PATCH 1/2] avcodec/s302m: enable non-PCM decoding
On 2024-02-16 01:56 am, Kieran Kunhya wrote: On Thu, 15 Feb 2024 at 16:48, Gyan Doshi wrote: On 2024-02-15 09:40 pm, Anton Khirnov wrote: Quoting Gyan Doshi (2024-02-15 13:31:59) On 2024-02-15 04:17 pm, Anton Khirnov wrote: Hi, sorry for the delay, I've been busy fixing things for the release Quoting Gyan Doshi via ffmpeg-devel (2024-01-29 05:00:33) On 2024-01-28 04:24 pm, Anton Khirnov wrote: a) it would mean essentially inlining this decoder in the demuxer. Why is that a problem? This decoder seems like it shouldn't be a decoder. I agree with Andreas that this seems like it's a demuxer pretending to be a decoder. This module transforms the entire raw payload data to generate its output, even if the syntax is simple which essentially makes it a de-coder. The de-multiplexer aspect of multiple streams is an academic possibility allowed by the standard but not seen in any sample which makes me suspect it's used for carriage between broadcast facilities rather than something ever sent to an OTT provider, let alone an end user. If it dynamically generates nested decoders, then it's not a proper codec in our model. It should be either a part of the demuxer, or a bitstream filter (possibly inserted automatically by the demuxer). s302m is a hybrid creature and does not slot cleanly into any role. So there is no theoretically proper place for this component - any choice is a least-out-of-place accommodation. But it is much more out of place inside a demuxer. Analyzing packet payload and then manipulating that entire payload is much closer to a decoding role than data chunk extraction for packetization. I don't see why specifically this property should be the one distinguishing demuxers from decoders, it sounds pretty arbitrary to me. Many demuxers apply transformations of far higher complexity to the bytestream before exporting it, e.g. in matroska the packet data may be compressed, laced, etc. And the stream extracted from the container is meant to be SMPTE ST 302 not PCM* or Dolby-E/AC-3..etc, "meant to be"? By whom? The point of libavformat is to abstract away the differences between containers as much as is reasonably feasible, and export the data in the format most useful to the caller for decoding or other processing. which will both misrepresent what the container carries Why should the caller care? and possibly discard S-ADM metadata, if present, in the packet. Why could that not be exported as side data? A bsf in principle would work but in practice, can't as Andreas clarified that bsfs can't set or alter codec_id after init. And resetting the codec id requires packet inspection. There are two possibilities then - either extend the BSF API to support multiple output streams, or implement it inside libavformat as a post-demuxer hook called in the same place as parsing. Nested decoders are used without issue in components like imm5 or ftr (upto 64 nested decoders!) among others. There's no breaking of new ground here. Nested decoders are certainly not "without issue" - they are a constant source of issues, since implementing nesting properly is very tricky. I am fairly sure most nested decoders we have are subtly broken in various ways. This patch facilitates a certain productive use of ffmpeg with respect to processing of live inputs that wasn't possible earlier, and which currently is being used successfully by multiple people over the past few weeks. It applies a processing model already implemented in multiple other decoders for a number of years. 
I haven't seen many reports of issues with them. And surely something being 'a constant source of issues' would be a lot more than 'subtly broken' as you describe them. You're the only one who has objected on architectural grounds and this looks to be a fundamental disagreement. If you are blocking this patch, do acknowledge here within 24 hours and we can send this to the TC, else I'll push it after that period. Dolby E can exist in other containers apart from S302M. Raising the TC is premature. This patch only concerns s302m and sets up a framework for non-PCM decode. Dolby-E is incidental. Regards, Gyan
Re: [FFmpeg-devel] [PATCH 2/2] avcodec/speexdec: fix setting frame_size from extradata
On Fri, Jan 19, 2024 at 10:47:30PM -0300, James Almer wrote: > Finishes fixing vp5/potter512-400-partial.avi > > The fate-matroska-ms-mode test ref is updated to reflect that the Speex > decoder > can now read the stream. > > Signed-off-by: James Almer > --- > libavcodec/speexdec.c | 4 +--- > tests/ref/fate/matroska-ms-mode | 2 +- > 2 files changed, 2 insertions(+), 4 deletions(-)
There's plenty of code remaining after this change that assumes NB_FRAME_SIZE samples are available and writes that many as well. There was a testcase reported that causes this and many others; I'll forward the testcase.
==18747== Invalid write of size 4
==18747==    at 0xD1CFC3: sb_decode (speexdec.c:1260)
==18747==    by 0xD1E5CE: speex_decode_frame (speexdec.c:1557)
==18747==    by 0x998846: decode_simple_internal (decode.c:430)
==18747==    by 0x998DD7: decode_simple_receive_frame (decode.c:609)
==18747==    by 0x998F47: decode_receive_frame_internal (decode.c:637)
==18747==    by 0x99930C: avcodec_send_packet (decode.c:734)
==18747==    by 0x669D2F: try_decode_frame (demux.c:2126)
==18747==    by 0x66CAA0: avformat_find_stream_info (demux.c:2809)
==18747==    by 0x24E9C2: ifile_open (ffmpeg_demux.c:1663)
==18747==    by 0x2755BE: open_files (ffmpeg_opt.c:1333)
==18747==    by 0x275780: ffmpeg_parse_options (ffmpeg_opt.c:1373)
==18747==    by 0x289702: main (ffmpeg.c:1032)
==18747== Address 0x16a170c0 is 0 bytes after a block of size 1,536 alloc'd
==18747==    at 0x4C33E76: memalign (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==18747==    by 0x4C33F91: posix_memalign (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==18747==    by 0x13E3FE6: av_malloc (mem.c:105)
==18747==    by 0x13BFBFD: av_buffer_alloc (buffer.c:82)
==18747==    by 0x13C0645: pool_alloc_buffer (buffer.c:362)
==18747==    by 0x13C07C6: av_buffer_pool_get (buffer.c:401)
==18747==    by 0xA46052: audio_get_buffer (get_buffer.c:203)
==18747==    by 0xA4648D: avcodec_default_get_buffer2 (get_buffer.c:278)
==18747==    by 0x99B967: ff_get_buffer (decode.c:1673)
==18747==    by 0xD1E52E: speex_decode_frame (speexdec.c:1552)
==18747==    by 0x998846: decode_simple_internal (decode.c:430)
==18747==    by 0x998DD7: decode_simple_receive_frame (decode.c:609)
==18747==    by 0x998F47: decode_receive_frame_internal (decode.c:637)
==18747==    by 0x99930C: avcodec_send_packet (decode.c:734)
==18747==    by 0x669D2F: try_decode_frame (demux.c:2126)
==18747==    by 0x66CAA0: avformat_find_stream_info (demux.c:2809)
==18747==    by 0x24E9C2: ifile_open (ffmpeg_demux.c:1663)
==18747==    by 0x2755BE: open_files (ffmpeg_opt.c:1333)
==18747==    by 0x275780: ffmpeg_parse_options (ffmpeg_opt.c:1373)
==18747==    by 0x289702: main (ffmpeg.c:1032)
thx [...] -- Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB Good people do not need laws to tell them to act responsibly, while bad people will find a way around the laws. -- Plato
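To make the reported failure mode concrete, here is a heavily simplified, hypothetical sketch (names such as frames_per_packet and decoded are illustrative; this is not the actual speexdec.c code) of the pattern Michael describes: the output buffer is sized from the extradata-derived frame_size, while a later decode path still writes a full fixed-size frame.

/* Hypothetical illustration only -- simplified names, not the actual
 * speexdec.c code.  The pattern behind the invalid write above: the output
 * buffer is sized from the frame_size parsed out of extradata, while the
 * sub-band decode path still writes a full NB_FRAME_SIZE worth of samples. */
frame->nb_samples = frames_per_packet * frame_size;   /* may be < NB_FRAME_SIZE   */
ret = ff_get_buffer(avctx, frame, 0);                 /* buffer sized accordingly */
out = (float *)frame->data[0];
for (i = 0; i < NB_FRAME_SIZE; i++)                   /* fixed-size write ...     */
    out[i] = decoded[i];                              /* ... runs past the buffer */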
Re: [FFmpeg-devel] [PATCH v5] lavc/libvpxenc: add screen-content-mode option
On Tue, Feb 13, 2024 at 7:26 PM James Zern wrote: > > On Mon, Feb 12, 2024 at 10:34 PM Dariusz Marcinkiewicz via > ffmpeg-devel wrote: > > > > This exposes VP8E_SET_SCREEN_CONTENT_MODE option from libvpx. > > > > Co-authored-by: Erik Språng > > Signed-off-by: Dariusz Marcinkiewicz > > --- > > doc/encoders.texi | 6 ++ > > libavcodec/libvpxenc.c | 13 + > > libavcodec/version.h | 2 +- > > 3 files changed, 20 insertions(+), 1 deletion(-) > > > > lgtm. I'll submit this soon if there are no other comments. > Applied. Thanks for the patch.
Re: [FFmpeg-devel] [PATCH] avcodec/jpeg2000dec: support of 2 fields in 1 AVPacket
> > I may be missing something from the previous discussion, but the > AV_FRAME_FLAG_INTERLACED should indicate when that is the case. > I am not familiar enough with j2k code to know if that flag is correctly > set or not. > > AV_FRAME_FLAG_INTERLACED signals two fields which are interleaved. What the OP wants is a single field in an AVFrame. (insert rant about why interlaced is evil blah blah). Kieran
Re: [FFmpeg-devel] [PATCH] avcodec/jpeg2000dec: support of 2 fields in 1 AVPacket
On Thu, Feb 15, 2024 at 4:02 PM Jerome Martinez wrote: > In other words, I would like to know if AVFrame is intended at long term > to handle also fields in addition to frames, and if so is there a way to > signal that the AVFrame structure actually contains a field > I may be missing something from the previous discussion, but the AV_FRAME_FLAG_INTERLACED should indicate when that is the case. I am not familiar enough with j2k code to know if that flag is correctly set or not. -- Vittorio
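For reference, this is roughly how a caller inspects the interlacing signalling discussed here — a minimal sketch using the public AVFrame flags; note it only says the two fields are interleaved within the frame, it cannot express "this AVFrame is a single field", which is the gap Kieran points out.

#include <libavutil/frame.h>
#include <stdio.h>

/* Minimal sketch: inspect interlacing signalling on a decoded AVFrame.
 * AV_FRAME_FLAG_INTERLACED means the two fields are interleaved line by
 * line inside this frame; there is currently no flag saying "this AVFrame
 * holds a lone field". */
static void report_interlacing(const AVFrame *frame)
{
    if (frame->flags & AV_FRAME_FLAG_INTERLACED) {
        int tff = !!(frame->flags & AV_FRAME_FLAG_TOP_FIELD_FIRST);
        printf("interlaced frame, %s field first, %dx%d\n",
               tff ? "top" : "bottom", frame->width, frame->height);
    } else {
        printf("progressive (or unsignalled) frame, %dx%d\n",
               frame->width, frame->height);
    }
}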
Re: [FFmpeg-devel] [PATCH] [mov] Avoid OOM for invalid STCO / CO64 constructions.
On Thu, Feb 15, 2024 at 12:07:05PM -0800, Dale Curtis wrote: > On Mon, Feb 5, 2024 at 12:07 PM Michael Niedermayer > wrote: > > > assuming atom.size is an arbitrary 64bit value > > then the value of FFMIN() is also 64bit but entries is unsigned 32bit, > > this truncation > > would allow setting entries to values outside whats expected from FFMIN() > > also we seem to disalllow entries == 0 before this > > and its maybe possible to set entries = 0 here, bypassing the == 0 check > > before > > > Thanks. I've moved the clamp up to before the zero check. The only way a > bad 64-bit value could get in is if atom.size < 8, which I didn't think was > possible, but I've added a FFMAX(0,) there too. [...] > +FFMIN(avio_rb32(pb), > + FFMAX(0, (atom.size - 8) / > + (atom.type == MKTAG('s', 't', 'c', 'o') ? 4 : > 8))); FFMIN/MAX can evaluate their arguments multiple times so avio_rb32() might be executed more than once thx [...] -- Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB The real ebay dictionary, page 1 "Used only once"- "Some unspecified defect prevented a second use" "In good condition" - "Can be repaird by experienced expert" "As is" - "You wouldnt want it even if you were payed for it, if you knew ..."
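A minimal sketch of the pattern Michael is asking for, in the context of mov_read_stco() (simplified, hypothetical variable names; not the committed fix):

/* Sketch only -- not the committed fix, simplified from the patch under
 * review.  FFMIN()/FFMAX() are macros, so a side-effecting argument such
 * as avio_rb32(pb) may be expanded and evaluated more than once; read it
 * into a local exactly once, then clamp in 64-bit arithmetic. */
uint32_t raw_entries = avio_rb32(pb);
int64_t  entry_size  = atom.type == MKTAG('s', 't', 'c', 'o') ? 4 : 8;
int64_t  max_entries = FFMAX(0, ((int64_t)atom.size - 8) / entry_size);
unsigned entries     = FFMIN(raw_entries, max_entries);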
Re: [FFmpeg-devel] [PATCH 5/5] avformat/mov: rework ff_mov_read_chnl
On Thu, 15 Feb 2024, Anton Khirnov wrote: Quoting Marton Balint (2024-02-12 22:15:37) - native order, and that a single channel only appears once was always assumed for less than 64 channels, obviously this was incorrect Could you add a test for the case where it's not true? Actually fate-mov-mp4-pcm-float should cover this if I change the channel layout there to something which is not representable as a native layout. Regards, Marton
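For illustration, a layout that is not representable in native order can be built through the public API roughly like this — a hedged sketch with an arbitrary channel choice, not the actual FATE test change:

#include <libavutil/channel_layout.h>

/* Sketch: build a custom-order layout that has no native (bitmask)
 * representation because a channel id is repeated.  Such a layout can only
 * be described with AV_CHANNEL_ORDER_CUSTOM, which is the case the reworked
 * ff_mov_read_chnl has to handle.  The caller should eventually release it
 * with av_channel_layout_uninit(). */
static int make_non_native_layout(AVChannelLayout *layout)
{
    int ret = av_channel_layout_custom_init(layout, 3);
    if (ret < 0)
        return ret;
    layout->u.map[0].id = AV_CHAN_FRONT_LEFT;
    layout->u.map[1].id = AV_CHAN_FRONT_LEFT;   /* duplicate -> not native */
    layout->u.map[2].id = AV_CHAN_LOW_FREQUENCY;
    return 0;
}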
Re: [FFmpeg-devel] [PATCH 1/2] avcodec/s302m: enable non-PCM decoding
On Thu, 15 Feb 2024 at 16:48, Gyan Doshi wrote: > > > On 2024-02-15 09:40 pm, Anton Khirnov wrote: > > Quoting Gyan Doshi (2024-02-15 13:31:59) > >> On 2024-02-15 04:17 pm, Anton Khirnov wrote: > >>> Hi, > >>> sorry for the delay, I've been busy fixing things for the release > >>> Quoting Gyan Doshi via ffmpeg-devel (2024-01-29 05:00:33) > On 2024-01-28 04:24 pm, Anton Khirnov wrote: > >> a) it would mean essentially inlining this decoder in the demuxer. > > Why is that a problem? This decoder seems like it shouldn't be a > > decoder. > > > > I agree with Andreas that this seems like it's a demuxer pretending > to > > be a decoder. > This module transforms the entire raw payload data to generate its > output, even if the syntax is simple which > essentially makes it a de-coder. The de-multiplexer aspect of multiple > streams is an academic possibility allowed > by the standard but not seen in any sample which makes me suspect it's > used for carriage between broadcast > facilities rather than something ever sent to an OTT provider, let > alone > an end user. > >>> If it dynamically generates nested decoders, then it's not a proper > >>> codec in our model. It should be either a part of the demuxer, or a > >>> bitstream filter (possibly inserted automatically by the demuxer). > >> s302m is a hybrid creature and does not slot cleanly into any role. So > >> there is no theoretically proper place for this component - any choice > >> is a least-out-of-place accommodation. > >> > >> But it is much more out of place inside a demuxer. Analyzing packet > >> payload and then manipulating that entire payload is much closer to a > >> decoding role than data chunk extraction for packetization. > > I don't see why specifically this property should be the one > > distinguishing demuxers from decoders, it sounds pretty arbitrary to me. > > Many demuxers apply transformations of far higher complexity to the > > bytestream before exporting it, e.g. in matroska the packet data may be > > compressed, laced, etc. > > > >> And the stream extracted from the container is meant to be SMPTE ST > >> 302 not PCM* or Dolby-E/AC-3..etc, > > "meant to be"? By whom? > > > > The point of libavformat is to abstract away the differences between > > containers as much as is reasonably feasible, and export the data in the > > format most useful to the caller for decoding or other processing. > > > >> which will both misrepresent what the container carries > > Why should the caller care? > > > >> and possibly discard S-ADM metadata, if present, in the packet. > > Why could that not be exported as side data? > > > >> A bsf in principle would work but in practice, can't as Andreas > >> clarified that bsfs can't set or alter codec_id after init. And > >> resetting the codec id requires packet inspection. > > There are two possibilities then - either extend the BSF API to support > > multiple output streams, or implement it inside libavformat as a > > post-demuxer hook called in the same place as parsing. > > > >> Nested decoders are used without issue in components like imm5 or ftr > >> (upto 64 nested decoders!) among others. There's no breaking of new > >> ground here. > > Nested decoders are certainly not "without issue" - they are a constant > > source of issues, since implementing nesting properly is very tricky. I > > am fairly sure most nested decoders we have are subtly broken in various > > ways. 
> > This patch facilitates a certain productive use of ffmpeg with respect > to processing of live inputs that wasn't possible earlier, > and which currently is being used successfully by multiple people over > the past few weeks. > It applies a processing model already implemented in multiple other > decoders for a number of years. I haven't seen many reports > of issues with them. And surely something being 'a constant source of > issues' would be a lot more than 'subtly broken' as you describe > them.You're the only one who has objected on architectural grounds and > this looks to be a fundamental disagreement. > If you are blocking this patch, do acknowledge here within 24 hours and > we can send this to the TC else I'll push it after that period. > Dolby E can exist in other containers apart from S302M. Raising the TC is premature. Kieran ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-devel] [PATCH] [mov] Avoid OOM for invalid STCO / CO64 constructions.
On Mon, Feb 5, 2024 at 12:07 PM Michael Niedermayer wrote: > assuming atom.size is an arbitrary 64bit value > then the value of FFMIN() is also 64bit but entries is unsigned 32bit, > this truncation > would allow setting entries to values outside whats expected from FFMIN() > also we seem to disalllow entries == 0 before this > and its maybe possible to set entries = 0 here, bypassing the == 0 check > before Thanks. I've moved the clamp up to before the zero check. The only way a bad 64-bit value could get in is if atom.size < 8, which I didn't think was possible, but I've added a FFMAX(0,) there too. - dale From db3e9ffc364cc94cb3a72696d4d4858af6abcc42 Mon Sep 17 00:00:00 2001 From: Dale Curtis Date: Fri, 2 Feb 2024 20:49:44 + Subject: [PATCH] [mov] Avoid OOM for invalid STCO / CO64 constructions. The `entries` value is read directly from the stream and used to allocate memory. This change clamps `entries` to however many are possible in the remaining atom or file size (whichever is smallest). Fixes https://crbug.com/1429357 Signed-off-by: Dale Curtis --- libavformat/mov.c | 8 +++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/libavformat/mov.c b/libavformat/mov.c index af95e1f662..1e4850fe9f 100644 --- a/libavformat/mov.c +++ b/libavformat/mov.c @@ -2228,7 +2228,12 @@ static int mov_read_stco(MOVContext *c, AVIOContext *pb, MOVAtom atom) avio_r8(pb); /* version */ avio_rb24(pb); /* flags */ -entries = avio_rb32(pb); +// Clamp allocation size for `chunk_offsets` -- don't throw an error for an +// invalid count since the EOF path doesn't throw either. +entries = +FFMIN(avio_rb32(pb), + FFMAX(0, (atom.size - 8) / + (atom.type == MKTAG('s', 't', 'c', 'o') ? 4 : 8))); if (!entries) return 0; @@ -2237,6 +2242,7 @@ static int mov_read_stco(MOVContext *c, AVIOContext *pb, MOVAtom atom) av_log(c->fc, AV_LOG_WARNING, "Ignoring duplicated STCO atom\n"); return 0; } + av_free(sc->chunk_offsets); sc->chunk_count = 0; sc->chunk_offsets = av_malloc_array(entries, sizeof(*sc->chunk_offsets)); -- 2.44.0.rc0.258.g7320e95886-goog ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-devel] [PATCH v2] {avcodec, tests}: rename the bundled Mesa AV1 vulkan video headers
Feb 15, 2024, 20:18 by jee...@gmail.com: > This together with adjusting the inclusion define allows for the > build to not fail with latest Vulkan-Headers that contain the > stabilized Vulkan AV1 decoding definitions. > > Compilation fails currently as the AV1 header is getting included > via hwcontext_vulkan.h -> -> vulkan_core.h, which > finally includes vk_video/vulkan_video_codec_av1std.h and the decode > header, leading to the bundled header to never defining anything > due to the inclusion define being the same. > > This fix is imperfect, as it leads to additional re-definition > warnings for things such as > VK_STD_VULKAN_VIDEO_CODEC_AV1_DECODE_SPEC_VERSION. , but it is > not clear how to otherwise have the bundled version trump the > actually standardized one for a short-term compilation fix. > --- > libavcodec/Makefile | 4 ++-- > libavcodec/vulkan_video.h | 4 ++-- > ...v1std_decode.h => vulkan_video_codec_av1std_decode_mesa.h} | 4 ++-- > ..._video_codec_av1std.h => vulkan_video_codec_av1std_mesa.h} | 4 ++-- > tests/ref/fate/source | 4 ++-- > 5 files changed, 10 insertions(+), 10 deletions(-) > rename libavcodec/{vulkan_video_codec_av1std_decode.h => > vulkan_video_codec_av1std_decode_mesa.h} (89%) > rename libavcodec/{vulkan_video_codec_av1std.h => > vulkan_video_codec_av1std_mesa.h} (99%) > > diff --git a/libavcodec/Makefile b/libavcodec/Makefile > index 470d7cb9b1..09ae5270b3 100644 > --- a/libavcodec/Makefile > +++ b/libavcodec/Makefile > @@ -1262,7 +1262,7 @@ SKIPHEADERS+= %_tablegen.h > \ > aacenc_quantization.h \ > aacenc_quantization_misc.h\ > bitstream_template.h \ > - vulkan_video_codec_av1std.h \ > + vulkan_video_codec_av1std_mesa.h \ > $(ARCH)/vpx_arith.h \ > > SKIPHEADERS-$(CONFIG_AMF) += amfenc.h > @@ -1285,7 +1285,7 @@ SKIPHEADERS-$(CONFIG_XVMC) += xvmc.h > SKIPHEADERS-$(CONFIG_VAAPI)+= vaapi_decode.h vaapi_hevc.h > vaapi_encode.h > SKIPHEADERS-$(CONFIG_VDPAU)+= vdpau.h vdpau_internal.h > SKIPHEADERS-$(CONFIG_VIDEOTOOLBOX) += videotoolbox.h vt_internal.h > -SKIPHEADERS-$(CONFIG_VULKAN) += vulkan.h vulkan_video.h > vulkan_decode.h vulkan_video_codec_av1std_decode.h > +SKIPHEADERS-$(CONFIG_VULKAN) += vulkan.h vulkan_video.h > vulkan_decode.h vulkan_video_codec_av1std_decode_mesa.h > SKIPHEADERS-$(CONFIG_V4L2_M2M) += v4l2_buffers.h v4l2_context.h > v4l2_m2m.h > SKIPHEADERS-$(CONFIG_ZLIB) += zlib_wrapper.h > > diff --git a/libavcodec/vulkan_video.h b/libavcodec/vulkan_video.h > index b28e3fe0bd..51f44dd543 100644 > --- a/libavcodec/vulkan_video.h > +++ b/libavcodec/vulkan_video.h > @@ -23,8 +23,8 @@ > #include "vulkan.h" > > #include > -#include "vulkan_video_codec_av1std.h" > -#include "vulkan_video_codec_av1std_decode.h" > +#include "vulkan_video_codec_av1std_mesa.h" > +#include "vulkan_video_codec_av1std_decode_mesa.h" > > #define CODEC_VER_MAJ(ver) (ver >> 22) > #define CODEC_VER_MIN(ver) ((ver >> 12) & ((1 << 10) - 1)) > diff --git a/libavcodec/vulkan_video_codec_av1std_decode.h > b/libavcodec/vulkan_video_codec_av1std_decode_mesa.h > similarity index 89% > rename from libavcodec/vulkan_video_codec_av1std_decode.h > rename to libavcodec/vulkan_video_codec_av1std_decode_mesa.h > index a697c00593..e2f37b4e6e 100644 > --- a/libavcodec/vulkan_video_codec_av1std_decode.h > +++ b/libavcodec/vulkan_video_codec_av1std_decode_mesa.h > @@ -14,8 +14,8 @@ > * limitations under the License. 
> */ > > -#ifndef VULKAN_VIDEO_CODEC_AV1STD_DECODE_H_ > -#define VULKAN_VIDEO_CODEC_AV1STD_DECODE_H_ 1 > +#ifndef VULKAN_VIDEO_CODEC_AV1STD_DECODE_MESA_H_ > +#define VULKAN_VIDEO_CODEC_AV1STD_DECODE_MESA_H_ 1 > > /* > ** This header is NOT YET generated from the Khronos Vulkan XML API Registry. > diff --git a/libavcodec/vulkan_video_codec_av1std.h > b/libavcodec/vulkan_video_codec_av1std_mesa.h > similarity index 99% > rename from libavcodec/vulkan_video_codec_av1std.h > rename to libavcodec/vulkan_video_codec_av1std_mesa.h > index c46236c457..c91589eee2 100644 > --- a/libavcodec/vulkan_video_codec_av1std.h > +++ b/libavcodec/vulkan_video_codec_av1std_mesa.h > @@ -14,8 +14,8 @@ > * limitations under the License. > */ > > -#ifndef VULKAN_VIDEO_CODEC_AV1STD_H_ > -#define VULKAN_VIDEO_CODEC_AV1STD_H_ 1 > +#ifndef VULKAN_VIDEO_CODEC_AV1STD_MESA_H_ > +#define VULKAN_VIDEO_CODEC_AV1STD_MESA_H_ 1 > > /* > ** This header is NOT YET generated from the Khronos Vulkan XML API Registry. > diff --git a/tests/ref/fate/source b/tests/ref/fate/source > index c575789dd5..8bb58b61f1 100644 > --- a/tests/ref/fate/source > +++ b/tests/ref/fate/source > @@ -23,8 +23,8 @@ compat/djgpp/math.h > compat/float/flo
[FFmpeg-devel] [PATCH v2] {avcodec, tests}: rename the bundled Mesa AV1 vulkan video headers
This together with adjusting the inclusion define allows for the build to not fail with latest Vulkan-Headers that contain the stabilized Vulkan AV1 decoding definitions. Compilation fails currently as the AV1 header is getting included via hwcontext_vulkan.h -> -> vulkan_core.h, which finally includes vk_video/vulkan_video_codec_av1std.h and the decode header, leading to the bundled header to never defining anything due to the inclusion define being the same. This fix is imperfect, as it leads to additional re-definition warnings for things such as VK_STD_VULKAN_VIDEO_CODEC_AV1_DECODE_SPEC_VERSION. , but it is not clear how to otherwise have the bundled version trump the actually standardized one for a short-term compilation fix. --- libavcodec/Makefile | 4 ++-- libavcodec/vulkan_video.h | 4 ++-- ...v1std_decode.h => vulkan_video_codec_av1std_decode_mesa.h} | 4 ++-- ..._video_codec_av1std.h => vulkan_video_codec_av1std_mesa.h} | 4 ++-- tests/ref/fate/source | 4 ++-- 5 files changed, 10 insertions(+), 10 deletions(-) rename libavcodec/{vulkan_video_codec_av1std_decode.h => vulkan_video_codec_av1std_decode_mesa.h} (89%) rename libavcodec/{vulkan_video_codec_av1std.h => vulkan_video_codec_av1std_mesa.h} (99%) diff --git a/libavcodec/Makefile b/libavcodec/Makefile index 470d7cb9b1..09ae5270b3 100644 --- a/libavcodec/Makefile +++ b/libavcodec/Makefile @@ -1262,7 +1262,7 @@ SKIPHEADERS+= %_tablegen.h \ aacenc_quantization.h \ aacenc_quantization_misc.h\ bitstream_template.h \ - vulkan_video_codec_av1std.h \ + vulkan_video_codec_av1std_mesa.h \ $(ARCH)/vpx_arith.h \ SKIPHEADERS-$(CONFIG_AMF) += amfenc.h @@ -1285,7 +1285,7 @@ SKIPHEADERS-$(CONFIG_XVMC) += xvmc.h SKIPHEADERS-$(CONFIG_VAAPI)+= vaapi_decode.h vaapi_hevc.h vaapi_encode.h SKIPHEADERS-$(CONFIG_VDPAU)+= vdpau.h vdpau_internal.h SKIPHEADERS-$(CONFIG_VIDEOTOOLBOX) += videotoolbox.h vt_internal.h -SKIPHEADERS-$(CONFIG_VULKAN) += vulkan.h vulkan_video.h vulkan_decode.h vulkan_video_codec_av1std_decode.h +SKIPHEADERS-$(CONFIG_VULKAN) += vulkan.h vulkan_video.h vulkan_decode.h vulkan_video_codec_av1std_decode_mesa.h SKIPHEADERS-$(CONFIG_V4L2_M2M) += v4l2_buffers.h v4l2_context.h v4l2_m2m.h SKIPHEADERS-$(CONFIG_ZLIB) += zlib_wrapper.h diff --git a/libavcodec/vulkan_video.h b/libavcodec/vulkan_video.h index b28e3fe0bd..51f44dd543 100644 --- a/libavcodec/vulkan_video.h +++ b/libavcodec/vulkan_video.h @@ -23,8 +23,8 @@ #include "vulkan.h" #include -#include "vulkan_video_codec_av1std.h" -#include "vulkan_video_codec_av1std_decode.h" +#include "vulkan_video_codec_av1std_mesa.h" +#include "vulkan_video_codec_av1std_decode_mesa.h" #define CODEC_VER_MAJ(ver) (ver >> 22) #define CODEC_VER_MIN(ver) ((ver >> 12) & ((1 << 10) - 1)) diff --git a/libavcodec/vulkan_video_codec_av1std_decode.h b/libavcodec/vulkan_video_codec_av1std_decode_mesa.h similarity index 89% rename from libavcodec/vulkan_video_codec_av1std_decode.h rename to libavcodec/vulkan_video_codec_av1std_decode_mesa.h index a697c00593..e2f37b4e6e 100644 --- a/libavcodec/vulkan_video_codec_av1std_decode.h +++ b/libavcodec/vulkan_video_codec_av1std_decode_mesa.h @@ -14,8 +14,8 @@ * limitations under the License. */ -#ifndef VULKAN_VIDEO_CODEC_AV1STD_DECODE_H_ -#define VULKAN_VIDEO_CODEC_AV1STD_DECODE_H_ 1 +#ifndef VULKAN_VIDEO_CODEC_AV1STD_DECODE_MESA_H_ +#define VULKAN_VIDEO_CODEC_AV1STD_DECODE_MESA_H_ 1 /* ** This header is NOT YET generated from the Khronos Vulkan XML API Registry. 
diff --git a/libavcodec/vulkan_video_codec_av1std.h b/libavcodec/vulkan_video_codec_av1std_mesa.h similarity index 99% rename from libavcodec/vulkan_video_codec_av1std.h rename to libavcodec/vulkan_video_codec_av1std_mesa.h index c46236c457..c91589eee2 100644 --- a/libavcodec/vulkan_video_codec_av1std.h +++ b/libavcodec/vulkan_video_codec_av1std_mesa.h @@ -14,8 +14,8 @@ * limitations under the License. */ -#ifndef VULKAN_VIDEO_CODEC_AV1STD_H_ -#define VULKAN_VIDEO_CODEC_AV1STD_H_ 1 +#ifndef VULKAN_VIDEO_CODEC_AV1STD_MESA_H_ +#define VULKAN_VIDEO_CODEC_AV1STD_MESA_H_ 1 /* ** This header is NOT YET generated from the Khronos Vulkan XML API Registry. diff --git a/tests/ref/fate/source b/tests/ref/fate/source index c575789dd5..8bb58b61f1 100644 --- a/tests/ref/fate/source +++ b/tests/ref/fate/source @@ -23,8 +23,8 @@ compat/djgpp/math.h compat/float/float.h compat/float/limits.h libavcodec/bitstream_template.h -libavcodec/vulkan_video_codec_a
Re: [FFmpeg-devel] [PATCH 1/7] swscale/tests/swscale: Implement isALPHA() using AVPixFmtDescriptor
On Thu, Feb 15, 2024 at 07:20:32AM +0100, Anton Khirnov wrote: > I remember wondering why was this not run as a FATE test. The full reference file is > 100mb and the full test takes longer than the full fate test. This patchset makes it possible to run random subsets of the test, so it's a step toward enabling this for fate. I will apply it. thx [...] -- Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB You can kill me, but you cannot change the truth.
Re: [FFmpeg-devel] [PATCH 1/2] avcodec/s302m: enable non-PCM decoding
On 2024-02-15 09:40 pm, Anton Khirnov wrote: Quoting Gyan Doshi (2024-02-15 13:31:59) On 2024-02-15 04:17 pm, Anton Khirnov wrote: Hi, sorry for the delay, I've been busy fixing things for the release Quoting Gyan Doshi via ffmpeg-devel (2024-01-29 05:00:33) On 2024-01-28 04:24 pm, Anton Khirnov wrote: a) it would mean essentially inlining this decoder in the demuxer. Why is that a problem? This decoder seems like it shouldn't be a decoder. I agree with Andreas that this seems like it's a demuxer pretending to be a decoder. This module transforms the entire raw payload data to generate its output, even if the syntax is simple which essentially makes it a de-coder. The de-multiplexer aspect of multiple streams is an academic possibility allowed by the standard but not seen in any sample which makes me suspect it's used for carriage between broadcast facilities rather than something ever sent to an OTT provider, let alone an end user. If it dynamically generates nested decoders, then it's not a proper codec in our model. It should be either a part of the demuxer, or a bitstream filter (possibly inserted automatically by the demuxer). s302m is a hybrid creature and does not slot cleanly into any role. So there is no theoretically proper place for this component - any choice is a least-out-of-place accommodation. But it is much more out of place inside a demuxer. Analyzing packet payload and then manipulating that entire payload is much closer to a decoding role than data chunk extraction for packetization. I don't see why specifically this property should be the one distinguishing demuxers from decoders, it sounds pretty arbitrary to me. Many demuxers apply transformations of far higher complexity to the bytestream before exporting it, e.g. in matroska the packet data may be compressed, laced, etc. And the stream extracted from the container is meant to be SMPTE ST 302 not PCM* or Dolby-E/AC-3..etc, "meant to be"? By whom? The point of libavformat is to abstract away the differences between containers as much as is reasonably feasible, and export the data in the format most useful to the caller for decoding or other processing. which will both misrepresent what the container carries Why should the caller care? and possibly discard S-ADM metadata, if present, in the packet. Why could that not be exported as side data? A bsf in principle would work but in practice, can't as Andreas clarified that bsfs can't set or alter codec_id after init. And resetting the codec id requires packet inspection. There are two possibilities then - either extend the BSF API to support multiple output streams, or implement it inside libavformat as a post-demuxer hook called in the same place as parsing. Nested decoders are used without issue in components like imm5 or ftr (upto 64 nested decoders!) among others. There's no breaking of new ground here. Nested decoders are certainly not "without issue" - they are a constant source of issues, since implementing nesting properly is very tricky. I am fairly sure most nested decoders we have are subtly broken in various ways. This patch facilitates a certain productive use of ffmpeg with respect to processing of live inputs that wasn't possible earlier, and which currently is being used successfully by multiple people over the past few weeks. It applies a processing model already implemented in multiple other decoders for a number of years. I haven't seen many reports of issues with them. 
And surely something being 'a constant source of issues' would be a lot more than 'subtly broken' as you describe them. You're the only one who has objected on architectural grounds and this looks to be a fundamental disagreement. If you are blocking this patch, do acknowledge here within 24 hours and we can send this to the TC, else I'll push it after that period. Regards, Gyan
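For context, the "nested decoder" pattern referenced above (as in imm5 or ftr) looks roughly like this — a simplified sketch, not the s302m patch itself; AV_CODEC_ID_AC3 is only an example of an inner codec:

#include <libavcodec/avcodec.h>

/* Sketch of the nested-decoder pattern under discussion: an outer component
 * opens a second AVCodecContext internally and forwards re-wrapped packets
 * to it.  Simplified and hypothetical -- error handling is shortened. */
static AVCodecContext *open_inner_decoder(enum AVCodecID id)
{
    const AVCodec *codec = avcodec_find_decoder(id);
    AVCodecContext *inner;

    if (!codec)
        return NULL;
    inner = avcodec_alloc_context3(codec);
    if (!inner)
        return NULL;
    if (avcodec_open2(inner, codec, NULL) < 0) {
        avcodec_free_context(&inner);
        return NULL;
    }
    return inner;
}

/* usage inside the outer decoder, once the payload type has been detected:
 *     s->inner = open_inner_decoder(AV_CODEC_ID_AC3);
 *     avcodec_send_packet(s->inner, repacketized_pkt);
 *     avcodec_receive_frame(s->inner, frame);
 */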
Re: [FFmpeg-devel] [PATCH 1/2] avdevice: deprecate opengl outdev
Quoting J. Dekker (2024-02-13 08:34:25) > Signed-off-by: J. Dekker > --- > > These devices are fundamentally broken and usecases should be switched > away from output devices in general. Discussion in the thread tended towards > deprecation rather than immediate removal to give time for users to figure out > the best alternatives for their usecase. > > libavdevice/opengl_enc.c | 10 ++ > 1 file changed, 10 insertions(+) > > diff --git a/libavdevice/opengl_enc.c b/libavdevice/opengl_enc.c > index b2ac6eb16a..0c81ccc1c4 100644 > --- a/libavdevice/opengl_enc.c > +++ b/libavdevice/opengl_enc.c > @@ -224,6 +224,8 @@ typedef struct OpenGLContext { > int picture_height;///< Rendered height > int window_width; > int window_height; > + > +int warned; > } OpenGLContext; > > static const struct OpenGLFormatDesc { > @@ -1060,6 +1062,14 @@ static av_cold int opengl_write_header(AVFormatContext > *h) > AVStream *st; > int ret; > > +if (!opengl->warned) { > +av_log(opengl, AV_LOG_WARNING, > +"The opengl output device is deprecated. For monitoring purposes > in ffmpeg you can output to a file or use pipes and a video player.\n" Besides Marton's comment, this could also elaborate a little on why it is deprecated, e.g. "...due to being fundamentally incompatible with libavformat API". Also, could add a removal reminder to version_major.h, similar to FF_CODEC_CRYSTAL_HD. Other than those, looks good to me. -- Anton Khirnov
Re: [FFmpeg-devel] [PATCH 1/2] avcodec/s302m: enable non-PCM decoding
Quoting Gyan Doshi (2024-02-15 13:31:59) > On 2024-02-15 04:17 pm, Anton Khirnov wrote: > > Hi, > > sorry for the delay, I've been busy fixing things for the release > > Quoting Gyan Doshi via ffmpeg-devel (2024-01-29 05:00:33) > >> On 2024-01-28 04:24 pm, Anton Khirnov wrote: > a) it would mean essentially inlining this decoder in the demuxer. > >>> Why is that a problem? This decoder seems like it shouldn't be a > >>> decoder. > >>> > >>> I agree with Andreas that this seems like it's a demuxer pretending to > >>> be a decoder. > >> This module transforms the entire raw payload data to generate its > >> output, even if the syntax is simple which > >> essentially makes it a de-coder. The de-multiplexer aspect of multiple > >> streams is an academic possibility allowed > >> by the standard but not seen in any sample which makes me suspect it's > >> used for carriage between broadcast > >> facilities rather than something ever sent to an OTT provider, let alone > >> an end user. > > If it dynamically generates nested decoders, then it's not a proper > > codec in our model. It should be either a part of the demuxer, or a > > bitstream filter (possibly inserted automatically by the demuxer). > > s302m is a hybrid creature and does not slot cleanly into any role. So > there is no theoretically proper place for this component - any choice > is a least-out-of-place accommodation. > > But it is much more out of place inside a demuxer. Analyzing packet > payload and then manipulating that entire payload is much closer to a > decoding role than data chunk extraction for packetization. I don't see why specifically this property should be the one distinguishing demuxers from decoders, it sounds pretty arbitrary to me. Many demuxers apply transformations of far higher complexity to the bytestream before exporting it, e.g. in matroska the packet data may be compressed, laced, etc. > And the stream extracted from the container is meant to be SMPTE ST > 302 not PCM* or Dolby-E/AC-3..etc, "meant to be"? By whom? The point of libavformat is to abstract away the differences between containers as much as is reasonably feasible, and export the data in the format most useful to the caller for decoding or other processing. > which will both misrepresent what the container carries Why should the caller care? > and possibly discard S-ADM metadata, if present, in the packet. Why could that not be exported as side data? > A bsf in principle would work but in practice, can't as Andreas > clarified that bsfs can't set or alter codec_id after init. And > resetting the codec id requires packet inspection. There are two possibilities then - either extend the BSF API to support multiple output streams, or implement it inside libavformat as a post-demuxer hook called in the same place as parsing. > Nested decoders are used without issue in components like imm5 or ftr > (upto 64 nested decoders!) among others. There's no breaking of new > ground here. Nested decoders are certainly not "without issue" - they are a constant source of issues, since implementing nesting properly is very tricky. I am fairly sure most nested decoders we have are subtly broken in various ways. > If you feel strongly about this, please refer this to the TC. The TC is supposed to be invoked as a last resort. -- Anton Khirnov ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-devel] [PATCH v2] lavc/texturedsp: require explicitly-set frame dimensions
Quoting Connor Worley (2024-02-15 06:44:38) > This change decouples the frame dimensions from avctx, which is useful > for DXV decoding, and fixes incorrect behavior in the existing > implementation. > > Tested with `make fate THREADS=7` and > `make fate THREADS=7 THREAD_TYPE=slice`. > > Signed-off-by: Connor Worley > --- Looks reasonable, especially the part removing an AVCodecContext on stack. -- Anton Khirnov
Re: [FFmpeg-devel] [PATCH] avcodec/jpeg2000dec: support of 2 fields in 1 AVPacket
On 05/02/2024 01:19, Tomas Härdin wrote: [...] Which entry in the table would the provided file correspond to? To me it seems none of them fit. There's two fields, meaning two j2k codestreams, in each corresponding essence element KLV packet (I think, unless CP packets get reassembled somewhere else). Entry I2 seems closest but it specifies FULL_FRAME. I1 is otherwise tempting, but there SampleRate should equal the field rate whereas the file has SampleRate = 3/1001. Other examples I have (not shareable) with 2 jp2k pictures per KLV have identification from an old version of AmberFin iCR, I have no file with the I2 correctly signaled, with my first example it isI2 (2 fields per KLV) with I1 Header Metadata Property Values **but** with I2 essence container label which has a content byte (byte 15 of the UL) of 0x04 = I2. The AmberFin iCR files have the generic essence container label with content byte of 0x01 = FU (Unspecified) so for my main use case we could activate the search of the 2nd jp2k only if I2 is explicitly signaled by the essence container label but it would prevent to catch the 2nd field when this signaling is unspecified and buggy Frame layout + sample rate + edit rate. I agree that this is not what is defined in ST 422 in full, but note that frame layout and height are not required by ST 377 (only "best effort") so IMO we should not rely much on them, and at least we should handle what is in the wild, correct me if I am wrong but handling non conforming content seems an acceptable policy in FFmpeg (I think to e.g. DPX and non conforming EOLs of some scanners, their names are directly written in FFmpeg source code). Video Line Map is also best effort but without it it is not possible to know the field_order, I wonder what should be done in that case. Currently I rely on current implementation in FFmpeg for catching field_order and don't try to find the 2nd field if field_order is AV_FIELD_UNKNOWN (not important for me as all files I have have field_order related metadata). Also if I manually edit the MXF for having a conforming I2 property values, FFmpeg behaves same (still indicating half height and still silently discarding the 2nd field), so in my opinion the handling of 2 jp2k pictures per KLV is still relevant for handling correctly I2 conforming files and tolerating wrong property values may be relevant (checking essence container label only? to be discussed). On 03/02/2024 20:58, Tomas Härdin wrote: The fastest way, in a player, is probably to do it with a shader. That should be the least amount of copies and the most cache coherent. As far a I know the player is not aware that the AVFrame actually contains a field so it can not apply a shader or something else, which AVFrame field indicates that this is a a field to be interleaved with the next AVFrame before display? Currently for I1 files ffplay or VLC show double rate half height so it seems that they don't catch that AVFrame contains a field. On 03/02/2024 21:04, Tomas Härdin wrote: It should also be said that what this patch effectively does is silently convert SEPARATE_FIELDS to MIXED_FIELDS. What if I want to transcode J2K to lower bitrate but keep it SEPARATE_FIELDS? 
I don't get what is the expected behavior of FFmpeg: what is the meaning of "243" in "Stream #0:0: Video: jpeg2000, yuv422p10le(bottom coded first (swapped)), 720x243, lossless, SAR 9:20 DAR 4:3, 29.97 fps, 29.97 tbr, 29.97 tbn" My understanding is that a frame height is expected, never a field height, and there is no indication in the current output that 243 is a field height for both I1 & I2, so the "silent conversion" would be expected by the user in order to have a frame in the output and never a field, am I wrong? Also it seems that there is no way to signal that the outputted picture is a field and not a frame, and FFmpeg handles I1 (1 field per KLV) as if it ignores that this is a field and not a frame, so when a I1 file is converted to another format without an interleave filter manually added the output is an ugly flipping double rate half height content. Silently converting a field to a frame seems to me a worse behavior than silently converting SEPARATE_FIELDS to MIXED_FIELDS because the output is not what is expected by the person who created the file as well as the person watching the output of FFmpeg. What if I want to transcode J2K to lower bitrate but keep it SEPARATE_FIELDS? Interlacing the lines then encoding separately the fields? It is more a matter of default behavior (deinterlace or not) and who would need to apply a filter, my issue is that I see no way to signal in FFmpeg that "got_frame" means "got frame or field" and that AVFrame contains a field so I would prefer that the default behavior is to handle frames in AVFrame rather than fields. Is it acceptable?Additionally the MXF container indicates (for conforming files) that I2 edit rate is for a frame even
Re: [FFmpeg-devel] [PATCH 5/5] avformat/mov: rework ff_mov_read_chnl
Quoting Marton Balint (2024-02-12 22:15:37) > - native order, and that a single channel only appears once was always assumed > for less than 64 channels, obviously this was incorrect Could you add a test for the case where it's not true? -- Anton Khirnov
Re: [FFmpeg-devel] [PATCH 2/5] avutil/channel_layout: add AV_CHANNEL_ORDER_NB
Quoting Marton Balint (2024-02-13 21:27:34) > > > On Tue, 13 Feb 2024, James Almer wrote: > > > On 2/12/2024 6:15 PM, Marton Balint wrote: > >> Signed-off-by: Marton Balint > >> --- > >>libavutil/channel_layout.h | 4 > >>1 file changed, 4 insertions(+) > >> > >> diff --git a/libavutil/channel_layout.h b/libavutil/channel_layout.h > >> index b8bff6f365..db0c005e87 100644 > >> --- a/libavutil/channel_layout.h > >> +++ b/libavutil/channel_layout.h > >> @@ -146,6 +146,10 @@ enum AVChannelOrder { > >> * as defined in AmbiX format $ 2.1. > >> */ > >>AV_CHANNEL_ORDER_AMBISONIC, > >> +/** > >> + * Number of channel orders, not part of ABI/API > >> + */ > >> +AV_CHANNEL_ORDER_NB > >>}; > > > > Is it worth adding this to a public header just to limit a loop in a test? > > A > > loop that fwiw still depends on an array that needs to be updated with more > > names if you add new orders. > > Several other enums also have this. So API consistency can be considered > a more important factor. I'd be concerned that many callers don't understand the implications of "not part of the ABI". Maybe we should rename all of them to FF_ prefix to make it more clear callers should not use these? -- Anton Khirnov
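For reference, the kind of loop the sentinel enables looks roughly like this — a sketch that assumes the patch above is applied (AV_CHANNEL_ORDER_NB is not in released headers), with a hypothetical name table standing in for the test's array:

#include <libavutil/channel_layout.h>
#include <stdio.h>

/* Sketch (assumes the AV_CHANNEL_ORDER_NB addition above): iterate every
 * channel order in a test.  The name table is hypothetical and, as James
 * notes, would still need updating whenever a new order is added. */
static const char *const order_names[] = {
    "unspecified", "native", "custom", "ambisonic",
};

static void print_orders(void)
{
    for (int i = 0; i < AV_CHANNEL_ORDER_NB; i++)
        printf("order %d: %s\n", i,
               i < (int)(sizeof(order_names) / sizeof(order_names[0]))
                   ? order_names[i] : "unknown");
}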
Re: [FFmpeg-devel] [PATCH] avcodec/x86/simple_idct: Empty MMX state in ff_simple_idct_mmx
> Will apply this patch tomorrow unless there are objections. > LGTM
Re: [FFmpeg-devel] [PATCH] avcodec/x86/simple_idct: Empty MMX state in ff_simple_idct_mmx
Andreas Rheinhardt: > We currently mostly do not empty the MMX state in our MMX > DSP functions; instead we only do so before code that might > be using x87 code. This is a violation of the System V i386 ABI > (and maybe of other ABIs, too): > "The CPU shall be in x87 mode upon entry to a function. Therefore, > every function that uses the MMX registers is required to issue an > emms or femms instruction after using MMX registers, before returning > or calling another function." (See 2.2.1 in [1]) > This patch does not intend to change all these functions to abide > by the ABI; it only does so for ff_simple_idct_mmx, as this > function can by called by external users, because it is exported > via the avdct API. Without this, the following fragment will > assert (on x86_32, as ff_simple_idct_mmx is not used on x64): > int16_t __attribute__ ((aligned (16))) block[64]; > AVDCT *dct = avcodec_dct_alloc(); > dct->idct_algo = FF_IDCT_AUTO; > avcodec_dct_init(dct); > dct->idct(block); > av_assert0_fpu(); > > [1]: > https://raw.githubusercontent.com/wiki/hjl-tools/x86-psABI/intel386-psABI-1.1.pdf > > Signed-off-by: Andreas Rheinhardt > --- > libavcodec/x86/simple_idct.asm | 1 + > 1 file changed, 1 insertion(+) > > diff --git a/libavcodec/x86/simple_idct.asm b/libavcodec/x86/simple_idct.asm > index 982b2f0bbb..4139b6dab5 100644 > --- a/libavcodec/x86/simple_idct.asm > +++ b/libavcodec/x86/simple_idct.asm > @@ -845,6 +845,7 @@ INIT_MMX mmx > > cglobal simple_idct, 1, 2, 8, 128, block, t0 > IDCT > +emms > RET > > INIT_XMM sse2 Will apply this patch tomorrow unless there are objections. - Andreas ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
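For clarity, the reproducer quoted in the commit message, expanded into a self-contained form (the surrounding main(), includes and cleanup are additions; the core calls are exactly those from the commit message and assert on x86_32 without the emms fix):

#include <stdint.h>
#include <libavcodec/avcodec.h>
#include <libavcodec/avdct.h>
#include <libavutil/avassert.h>
#include <libavutil/mem.h>

int main(void)
{
    /* Same fragment as in the commit message: call the exported MMX IDCT
     * through the avdct API and verify the FPU/MMX state afterwards.
     * Without the added emms, av_assert0_fpu() fails on x86_32. */
    int16_t __attribute__ ((aligned (16))) block[64] = { 0 };
    AVDCT *dct = avcodec_dct_alloc();

    if (!dct)
        return 1;
    dct->idct_algo = FF_IDCT_AUTO;
    avcodec_dct_init(dct);
    dct->idct(block);
    av_assert0_fpu();
    av_free(dct);
    return 0;
}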
Re: [FFmpeg-devel] [PATCH 1/2] avcodec/s302m: enable non-PCM decoding
On 2024-02-15 04:17 pm, Anton Khirnov wrote: Hi, sorry for the delay, I've been busy fixing things for the release Quoting Gyan Doshi via ffmpeg-devel (2024-01-29 05:00:33) On 2024-01-28 04:24 pm, Anton Khirnov wrote: a) it would mean essentially inlining this decoder in the demuxer. Why is that a problem? This decoder seems like it shouldn't be a decoder. I agree with Andreas that this seems like it's a demuxer pretending to be a decoder. This module transforms the entire raw payload data to generate its output, even if the syntax is simple which essentially makes it a de-coder. The de-multiplexer aspect of multiple streams is an academic possibility allowed by the standard but not seen in any sample which makes me suspect it's used for carriage between broadcast facilities rather than something ever sent to an OTT provider, let alone an end user. If it dynamically generates nested decoders, then it's not a proper codec in our model. It should be either a part of the demuxer, or a bitstream filter (possibly inserted automatically by the demuxer). s302m is a hybrid creature and does not slot cleanly into any role. So there is no theoretically proper place for this component - any choice is a least-out-of-place accommodation. But it is much more out of place inside a demuxer. Analyzing packet payload and then manipulating that entire payload is much closer to a decoding role than data chunk extraction for packetization. And the stream extracted from the container is meant to be SMPTE ST 302 not PCM* or Dolby-E/AC-3..etc, which will both misrepresent what the container carries and possibly discard S-ADM metadata, if present, in the packet. With passthrough demuxing, a stream can be mapped for both decoding and streamcopying. A bsf in principle would work but in practice, can't as Andreas clarified that bsfs can't set or alter codec_id after init. And resetting the codec id requires packet inspection. Nested decoders are used without issue in components like imm5 or ftr (upto 64 nested decoders!) among others. There's no breaking of new ground here. If you feel strongly about this, please refer this to the TC. Regards, Gyan ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
[FFmpeg-devel] [PATCH] lavc/vvc: Use pps->{width, height} over sps->{width, height}
From: Frank Plowman The PPS should be used instead of the SPS to get the current picture's dimensions. Using the SPS can cause issues if the resolution changes mid-sequence. In particular, it was leading to invalid memory accesses if the resolution decreased. Patch replaces sps->{width,height} with pps->{width,height}. It also removes sps->{width,height}, as these are no longer used anywhere. Fixes crash when decoding DVB V&V test sequence VVC_HDR_UHDTV1_ClosedGOP_Max3840x2160_50fps_HLG10_res_change_without_RPR Signed-off-by: Frank Plowman --- libavcodec/vvc/vvc_ctu.c| 4 ++-- libavcodec/vvc/vvc_ctu.h| 2 +- libavcodec/vvc/vvc_filter.c | 48 ++--- libavcodec/vvc/vvc_mvs.c| 4 ++-- libavcodec/vvc/vvc_ps.c | 3 --- libavcodec/vvc/vvc_ps.h | 2 -- libavcodec/vvc/vvc_refs.c | 5 ++-- 7 files changed, 32 insertions(+), 36 deletions(-) diff --git a/libavcodec/vvc/vvc_ctu.c b/libavcodec/vvc/vvc_ctu.c index d166b16a19..36f98f5f2b 100644 --- a/libavcodec/vvc/vvc_ctu.c +++ b/libavcodec/vvc/vvc_ctu.c @@ -2415,8 +2415,8 @@ void ff_vvc_decode_neighbour(VVCLocalContext *lc, const int x_ctb, const int y_c VVCFrameContext *fc = lc->fc; const int ctb_size = fc->ps.sps->ctb_size_y; -lc->end_of_tiles_x = fc->ps.sps->width; -lc->end_of_tiles_y = fc->ps.sps->height; +lc->end_of_tiles_x = fc->ps.pps->width; +lc->end_of_tiles_y = fc->ps.pps->height; if (fc->ps.pps->ctb_to_col_bd[rx] != fc->ps.pps->ctb_to_col_bd[rx + 1]) lc->end_of_tiles_x = FFMIN(x_ctb + ctb_size, lc->end_of_tiles_x); if (fc->ps.pps->ctb_to_row_bd[ry] != fc->ps.pps->ctb_to_row_bd[ry + 1]) diff --git a/libavcodec/vvc/vvc_ctu.h b/libavcodec/vvc/vvc_ctu.h index 91b4ed14a1..ab3fac626d 100644 --- a/libavcodec/vvc/vvc_ctu.h +++ b/libavcodec/vvc/vvc_ctu.h @@ -85,7 +85,7 @@ /** * Value of the luma sample at position (x, y) in the 2D array tab. 
*/ -#define SAMPLE(tab, x, y) ((tab)[(y) * s->sps->width + (x)]) +#define SAMPLE(tab, x, y) ((tab)[(y) * s->pps->width + (x)]) #define SAMPLE_CTB(tab, x, y) ((tab)[(y) * min_cb_width + (x)]) #define CTB(tab, x, y) ((tab)[(y) * fc->ps.pps->ctb_width + (x)]) diff --git a/libavcodec/vvc/vvc_filter.c b/libavcodec/vvc/vvc_filter.c index cebc02fd9f..df77e443f6 100644 --- a/libavcodec/vvc/vvc_filter.c +++ b/libavcodec/vvc/vvc_filter.c @@ -102,8 +102,8 @@ static void copy_ctb_to_hv(VVCFrameContext *fc, const uint8_t *src, const int c_idx, const int x_ctb, const int y_ctb, const int top) { const int ps = fc->ps.sps->pixel_shift; -const int w = fc->ps.sps->width >> fc->ps.sps->hshift[c_idx]; -const int h = fc->ps.sps->height >> fc->ps.sps->vshift[c_idx]; +const int w = fc->ps.pps->width >> fc->ps.sps->hshift[c_idx]; +const int h = fc->ps.pps->height >> fc->ps.sps->vshift[c_idx]; if (top) { /* top */ @@ -133,8 +133,8 @@ static void sao_copy_ctb_to_hv(VVCLocalContext *lc, const int rx, const int ry, const ptrdiff_t src_stride = fc->frame->linesize[c_idx]; const int ctb_size_h = ctb_size_y >> fc->ps.sps->hshift[c_idx]; const int ctb_size_v = ctb_size_y >> fc->ps.sps->vshift[c_idx]; -const int width= FFMIN(ctb_size_h, (fc->ps.sps->width >> fc->ps.sps->hshift[c_idx]) - x); -const int height = FFMIN(ctb_size_v, (fc->ps.sps->height >> fc->ps.sps->vshift[c_idx]) - y); +const int width= FFMIN(ctb_size_h, (fc->ps.pps->width >> fc->ps.sps->hshift[c_idx]) - x); +const int height = FFMIN(ctb_size_v, (fc->ps.pps->height >> fc->ps.sps->vshift[c_idx]) - y); const uint8_t *src = &fc->frame->data[c_idx][y * src_stride + (x << fc->ps.sps->pixel_shift)]; copy_ctb_to_hv(fc, src, src_stride, x, y, width, height, c_idx, rx, ry, top); } @@ -216,8 +216,8 @@ void ff_vvc_sao_filter(VVCLocalContext *lc, int x, int y) ptrdiff_t src_stride = fc->frame->linesize[c_idx]; int ctb_size_h = ctb_size_y >> fc->ps.sps->hshift[c_idx]; int ctb_size_v = ctb_size_y >> fc->ps.sps->vshift[c_idx]; -int width= FFMIN(ctb_size_h, (fc->ps.sps->width >> fc->ps.sps->hshift[c_idx]) - x0); -int height = FFMIN(ctb_size_v, (fc->ps.sps->height >> fc->ps.sps->vshift[c_idx]) - y0); +int width= FFMIN(ctb_size_h, (fc->ps.pps->width >> fc->ps.sps->hshift[c_idx]) - x0); +int height = FFMIN(ctb_size_v, (fc->ps.pps->height >> fc->ps.sps->vshift[c_idx]) - y0); int tab = sao_tab[(FFALIGN(width, 8) >> 3) - 1]; uint8_t *src = &fc->frame->data[c_idx][y0 * src_stride + (x0 << fc->ps.sps->pixel_shift)]; ptrdiff_t dst_stride; @@ -230,8 +230,8 @@ void ff_vvc_sao_filter(VVCLocalContext *lc, int x, int y) break; case SAO_EDGE: { -const int w = fc->ps.sps->width >> fc->ps.sps->hshift[c_idx]; -const int h = fc->ps.sps->h
Re: [FFmpeg-devel] [PATCH 1/2] avcodec/s302m: enable non-PCM decoding
Hi, sorry for the delay, I've been busy fixing things for the release Quoting Gyan Doshi via ffmpeg-devel (2024-01-29 05:00:33) > On 2024-01-28 04:24 pm, Anton Khirnov wrote: > >> a) it would mean essentially inlining this decoder in the demuxer. > > Why is that a problem? This decoder seems like it shouldn't be a > > decoder. > > > > I agree with Andreas that this seems like it's a demuxer pretending to > > be a decoder. > > This module transforms the entire raw payload data to generate its > output, even if the syntax is simple which > essentially makes it a de-coder. The de-multiplexer aspect of multiple > streams is an academic possibility allowed > by the standard but not seen in any sample which makes me suspect it's > used for carriage between broadcast > facilities rather than something ever sent to an OTT provider, let alone > an end user. If it dynamically generates nested decoders, then it's not a proper codec in our model. It should be either a part of the demuxer, or a bitstream filter (possibly inserted automatically by the demuxer). -- Anton Khirnov ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-devel] Add protocol for Android content providers
Le jeu. 15 févr. 2024, 9:46 AM, Zhao Zhili a écrit : > > > 在 2024年2月15日,下午3:57,Matthieu Bouron 写道: > > > > On Thu, Feb 15, 2024 at 12:13:59PM +0800, Zhao Zhili wrote: > >> > >> > On Feb 14, 2024, at 06:50, Matthieu Bouron > wrote: > >>> > >>> Hi, > >>> > >>> On Android, content providers are used for accessing files through > shared > >>> mechanisms. One typical case would be an app willing to open a video > from > >>> Google Photos, gallery apps, TikTok, Instagram or some other providers. > >>> A content URI looks something like "content://authority/path/id", see: > >>> https://developer.android.com/reference/android/content/ContentUris > >>> > https://developer.android.com/guide/topics/providers/content-provider-basics > >>> > >>> It can currently be somehow managed through clumsy means such as using > a "fd:" > >>> filename and crafting a special AVOption, which also has the drawback > of > >>> requiring the third party to carry around opened file descriptors > (with the > >>> multiple opened file limitations implied). Custom AVIOContexts are > also an > >> > >> File descriptor is a general abstraction layer, it target more > platforms than > >> Android specific content provider. Android provided getFd() API since > API > >> level 12, I guess that’s the default method to deal with content > provider in > >> native code. It’s a few lines of code to get native fd in Java, but > dozens of code > >> in C with JNI, which is what this patchset done. > >> > >> For multiple opened file limitations issue, they can close the file > descriptor after > >> open. It’s unlikely to reach the limit in normal case without leak. > >> > >> I’m OK to provide this android_content_protocol helper if user requests. > > > > I've been doing this kind of work for 3/4 users (including myself) at > this > > point and have to do it another time, this is what motivated me to > propose > > this patchset. > > > >> > >>> option. Both options will have to deal with the JNI though and end > users will > >>> have to re-implement the same exact thing. > >> > >> User still need to deal with JNI with the new android_content_protocol, > more or > >> less, it’s unavoidable. > > > > The advantage I see of using this protocol is that the user only need to > > call av_jni_set_jvm() + av_jni_set_android_app_ctx() at the start of the > > application and FFmpeg will handle the content-uri transparently. This is > > especially helpful if the Android application rely on multiple libraries > > that in turn rely on FFmpeg to read medias. > > The url still need to be passed from Java to C via JNI, it’s not much > different compared to pass fd. > It's not that much different I agree. But let's say you have a rendering engine (in C) where you need to pass hundreds of media (from the user) to render a scene, each media is used at different time during the rendering. And Ffmpeg is not a direct dependency and can be called from different libraries/places used by the rendering engine. Calling av_jni_set_android_app_ctx() and you're done, you can pass the content URI to the engine (passing fd at this stage is not an option imho). You still need to convert the uri from java string to c before calling the c code, but it's a direct translation which is typically part of a binding. > > > >> > >>> > >>> This patchset addresses this by adding a content provider protocol, > which has > >>> an API fairly similar to fopen. 
Android 11 appears to provide something > >>> transparent within fopen(), but FFmpeg doesn't use it in the file > protocol, and > >>> Android < 11 are still widely used. > >>> > >>> The first part move the JNI infrastructure from avcodec to avutil (it > remains > >>> internally shared, there is little user implication), > >> > >> OK. JNI infrastructure should belong to avutil at the first place, so > hwcontext_mediacodec > >> and so on can use it. Unfortunately for those new avpriv_. > > > > What do you mean by "Unfortunately" ? Would you like to make the JNI API > > public ? > > I think it’s our target to reduce the number of avpriv API, not increase > it. Does duplicate the compile unit work in this case so we don’t need to > export the symbols? > Directly including ffjni.c from libavformat/file.c works. We still need to pass the application context though (could be added to avcodec/jni.h) > > > > [...] > > ___ > > ffmpeg-devel mailing list > > ffmpeg-devel@ffmpeg.org > > https://ffmpeg.org/mailman/listinfo/ffmpeg-devel > > > > To unsubscribe, visit link above, or email > > ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscri > > ___ > ffmpeg-devel mailing list > ffmpeg-devel@ffmpeg.org > https://ffmpeg.org/mailman/listinfo/ffmpeg-devel > > To unsubscribe, visit link above, or email > ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe". > ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org ht
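To make the calling convention described above concrete, a minimal sketch of the application-side setup: set the JVM and application context once at startup, after which any code path that ends up in libavformat can open content:// URLs directly. av_jni_set_jvm() is the existing API from libavcodec/jni.h; av_jni_set_android_app_ctx() and the content protocol itself are the additions proposed in this patchset, so the exact signature used below is an assumption:

    #include <jni.h>
    #include <libavcodec/jni.h>
    #include <libavformat/avformat.h>

    /* Called once from the app (e.g. via a small JNI bridge) at startup.
     * av_jni_set_android_app_ctx() is from the proposed patchset; its
     * signature here is an assumption. */
    static void init_ffmpeg_android(JavaVM *vm, jobject app_context)
    {
        av_jni_set_jvm(vm, NULL);
        av_jni_set_android_app_ctx(app_context, NULL);
    }

    /* Any later caller -- possibly in a library that knows nothing about
     * Android -- can then open a content URI like a normal URL,
     * e.g. "content://authority/path/id". */
    static int open_content_uri(const char *uri)
    {
        AVFormatContext *ic = NULL;
        int ret = avformat_open_input(&ic, uri, NULL, NULL);
        if (ret < 0)
            return ret;
        /* ... find streams, demux, decode as usual ... */
        avformat_close_input(&ic);
        return 0;
    }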
Re: [FFmpeg-devel] Add protocol for Android content providers
> 在 2024年2月15日,下午3:57,Matthieu Bouron 写道: > > On Thu, Feb 15, 2024 at 12:13:59PM +0800, Zhao Zhili wrote: >> >> On Feb 14, 2024, at 06:50, Matthieu Bouron wrote: >>> >>> Hi, >>> >>> On Android, content providers are used for accessing files through shared >>> mechanisms. One typical case would be an app willing to open a video from >>> Google Photos, gallery apps, TikTok, Instagram or some other providers. >>> A content URI looks something like "content://authority/path/id", see: >>> https://developer.android.com/reference/android/content/ContentUris >>> https://developer.android.com/guide/topics/providers/content-provider-basics >>> >>> It can currently be somehow managed through clumsy means such as using a >>> "fd:" >>> filename and crafting a special AVOption, which also has the drawback of >>> requiring the third party to carry around opened file descriptors (with the >>> multiple opened file limitations implied). Custom AVIOContexts are also an >> >> File descriptor is a general abstraction layer, it target more platforms than >> Android specific content provider. Android provided getFd() API since API >> level 12, I guess that’s the default method to deal with content provider in >> native code. It’s a few lines of code to get native fd in Java, but dozens >> of code >> in C with JNI, which is what this patchset done. >> >> For multiple opened file limitations issue, they can close the file >> descriptor after >> open. It’s unlikely to reach the limit in normal case without leak. >> >> I’m OK to provide this android_content_protocol helper if user requests. > > I've been doing this kind of work for 3/4 users (including myself) at this > point and have to do it another time, this is what motivated me to propose > this patchset. > >> >>> option. Both options will have to deal with the JNI though and end users >>> will >>> have to re-implement the same exact thing. >> >> User still need to deal with JNI with the new android_content_protocol, more >> or >> less, it’s unavoidable. > > The advantage I see of using this protocol is that the user only need to > call av_jni_set_jvm() + av_jni_set_android_app_ctx() at the start of the > application and FFmpeg will handle the content-uri transparently. This is > especially helpful if the Android application rely on multiple libraries > that in turn rely on FFmpeg to read medias. The url still need to be passed from Java to C via JNI, it’s not much different compared to pass fd. > >> >>> >>> This patchset addresses this by adding a content provider protocol, which >>> has >>> an API fairly similar to fopen. Android 11 appears to provide something >>> transparent within fopen(), but FFmpeg doesn't use it in the file protocol, >>> and >>> Android < 11 are still widely used. >>> >>> The first part move the JNI infrastructure from avcodec to avutil (it >>> remains >>> internally shared, there is little user implication), >> >> OK. JNI infrastructure should belong to avutil at the first place, so >> hwcontext_mediacodec >> and so on can use it. Unfortunately for those new avpriv_. > > What do you mean by "Unfortunately" ? Would you like to make the JNI API > public ? I think it’s our target to reduce the number of avpriv API, not increase it. Does duplicate the compile unit work in this case so we don’t need to export the symbols? > > [...] 
Re: [FFmpeg-devel] [PATCH] lead: support format 0x0
On Sun, Nov 12, 2023 at 11:41:23AM +1100, Peter Ross wrote: > Fixes ticket #10660. > --- > > thanks to the mysterious 'ami_stuff'. > > libavcodec/leaddec.c | 38 ++ > 1 file changed, 26 insertions(+), 12 deletions(-) > > diff --git a/libavcodec/leaddec.c b/libavcodec/leaddec.c > index fd2018256d..4f2cc03e65 100644 > --- a/libavcodec/leaddec.c > +++ b/libavcodec/leaddec.c > @@ -139,7 +139,7 @@ static int lead_decode_frame(AVCodecContext *avctx, > AVFrame * frame, > { > LeadContext *s = avctx->priv_data; > const uint8_t * buf = avpkt->data; > -int ret, format, yuv20p_half = 0, fields = 1, q, size; > +int ret, format, yuv420p_div = 1, yuv20p_half = 0, fields = 1, q, size; > GetBitContext gb; > int16_t dc_pred[3] = {0, 0, 0}; > uint16_t dequant[2][64]; > @@ -149,6 +149,10 @@ static int lead_decode_frame(AVCodecContext *avctx, > AVFrame * frame, > > format = AV_RL16(buf + 4); > switch(format) { > +case 0x0: > +yuv420p_div = 2; > +avctx->pix_fmt = AV_PIX_FMT_YUV420P; > +break; > case 0x8000: > yuv20p_half = 1; > // fall-through > @@ -192,29 +196,39 @@ static int lead_decode_frame(AVCodecContext *avctx, > AVFrame * frame, > init_get_bits8(&gb, s->bitstream_buf, size); > > if (avctx->pix_fmt == AV_PIX_FMT_YUV420P) { > -for (int mb_y = 0; mb_y < (avctx->height + 15) / 16; mb_y++) > +for (int mb_y = 0; mb_y < (avctx->height + (16 / yuv420p_div - 1)) / > (16 / yuv420p_div); mb_y++) > for (int mb_x = 0; mb_x < (avctx->width + 15) / 16; mb_x++) > -for (int b = 0; b < (yuv20p_half ? 4 : 6); b++) { > -int luma_block = yuv20p_half ? 2 : 4; > +for (int b = 0; b < ((yuv420p_div == 2 || yuv20p_half) ? 4 : > 6); b++) { > +int luma_block = (yuv420p_div == 2 || yuv20p_half) ? 2 : > 4; > const VLCElem * dc_vlc = b < luma_block ? > luma_dc_vlc.table : chroma_dc_vlc.table; > int dc_bits= b < luma_block ? LUMA_DC_BITS : > CHROMA_DC_BITS; > const VLCElem * ac_vlc = b < luma_block ? > luma_ac_vlc.table : chroma_ac_vlc.table; > int ac_bits= b < luma_block ? LUMA_AC_BITS : > CHROMA_AC_BITS; > -int plane = b < luma_block ? 0 : b - > (yuv20p_half ? 1 : 3); > -int x, y; > +int plane = b < luma_block ? 0 : b - > (luma_block - 1); > +int x, y, yclip; > > if (b < luma_block) { > -y = 16*mb_y + 8*(b >> 1); > +y = (16 / yuv420p_div)*mb_y + 8*(b >> 1); > x = 16*mb_x + 8*(b & 1); > +yclip = 0; > } else { > -y = 8*mb_y; > +y = (8 / yuv420p_div)*mb_y; > x = 8*mb_x; > +yclip = (yuv420p_div == 2) && y + 8 >= avctx->height > / 2; > } > > -ret = decode_block(s, &gb, dc_vlc, dc_bits, ac_vlc, > ac_bits, > -dc_pred + plane, dequant[!(b < 4)], > -frame->data[plane] + y*frame->linesize[plane] + x, > -(yuv20p_half && b < 2 ? 2 : 1) * > frame->linesize[plane]); > +if (!yclip) { > +ret = decode_block(s, &gb, dc_vlc, dc_bits, ac_vlc, > ac_bits, > +dc_pred + plane, dequant[!(b < 4)], > +frame->data[plane] + y*frame->linesize[plane] + > x, > +(yuv20p_half && b < 2 ? 2 : 1) * > frame->linesize[plane]); > +} else { > +uint8_t tmp[64]; > +ret = decode_block(s, &gb, dc_vlc, dc_bits, ac_vlc, > ac_bits, > +dc_pred + plane, dequant[!(b < 4)], tmp, 8); > +for (int yy = 0; yy < 8 && y + yy < avctx->height / > 2; yy++) > +memcpy(frame->data[plane] + > (y+yy)*frame->linesize[plane] + x, tmp + yy, 8); > +} > if (ret < 0) > return ret; > will apply in a few days. 
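A side note on the edge handling above: blocks that straddle the bottom of the picture are decoded at their natural stride into a scratch buffer, and only the rows that fall inside the plane are copied out. A generic, self-contained sketch of that pattern (the helper names are invented for illustration; this is not the patch's code):

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    /* Stand-in for whatever produces an 8x8 block of samples at `stride`. */
    static void fill_8x8(uint8_t *dst, ptrdiff_t stride)
    {
        for (int y = 0; y < 8; y++)
            for (int x = 0; x < 8; x++)
                dst[y * stride + x] = (uint8_t)(16 * y + x);
    }

    /* Produce the block in a scratch buffer, then keep only the rows that
     * actually lie inside the plane. */
    static void put_clipped_8x8(uint8_t *plane, ptrdiff_t linesize,
                                int x, int y, int plane_height)
    {
        uint8_t tmp[8 * 8];

        fill_8x8(tmp, 8);
        for (int yy = 0; yy < 8 && y + yy < plane_height; yy++)
            memcpy(plane + (y + yy) * linesize + x, tmp + yy * 8, 8);
    }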
Re: [FFmpeg-devel] [PATCH 3/3] avcodec/pngenc: write eXIf chunks
On 2/14/24 13:53, Andreas Rheinhardt wrote: Leo Izen: Write EXIF metadata exposed AV_FRAME_DATA_EXIF as an eXIf chunk to PNG files, if present. Signed-off-by: Leo Izen --- libavcodec/pngenc.c | 4 1 file changed, 4 insertions(+) diff --git a/libavcodec/pngenc.c b/libavcodec/pngenc.c index 50689cb50c..a302c879da 100644 --- a/libavcodec/pngenc.c +++ b/libavcodec/pngenc.c @@ -413,6 +413,10 @@ static int encode_headers(AVCodecContext *avctx, const AVFrame *pict) } } +side_data = av_frame_get_side_data(pict, AV_FRAME_DATA_EXIF); +if (side_data) +png_write_chunk(&s->bytestream, MKTAG('e', 'X', 'I', 'f'), side_data->data, FFMIN(side_data->size, INT_MAX)); + side_data = av_frame_get_side_data(pict, AV_FRAME_DATA_ICC_PROFILE); if ((ret = png_write_iccp(s, side_data))) return ret; If I see this correctly, then these patches can lead to a situation where an input packet has rotation metadata in exif which gets exported twice -- as displaymatrix and as exif metadata side data. If the user changes the displaymatrix (e.g. applies the transformation to the image data and removes the displaymatrix side data before reencoding), the exif data (that the user would probably not be aware of) would still be there and get propagated into the output, corrupting it. Hm. This is true, but it's also currently how the mjpeg decoder functions (i.e. it attaches displaymatrix side data). There are no encoders that currently save EXIF metadata, so there's no precedent. How do you suggest this be reconciled? - Leo Izen (Traneptora) OpenPGP_signature.asc Description: OpenPGP digital signature ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
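From the caller's side, the concern above translates into: whenever the display matrix is applied to the pixels and dropped, the raw EXIF blob has to be dropped too, or the stale orientation tag gets written back out by an encoder like this one. A minimal sketch, assuming AV_FRAME_DATA_EXIF is the side-data type introduced by this patchset (the name is taken from the patch, not from a released header):

    #include <libavutil/frame.h>

    /* Call after baking the rotation into the pixel data. AV_FRAME_DATA_EXIF
     * is the side-data type proposed by this patchset. */
    static void drop_orientation_metadata(AVFrame *frame)
    {
        av_frame_remove_side_data(frame, AV_FRAME_DATA_DISPLAYMATRIX);
        av_frame_remove_side_data(frame, AV_FRAME_DATA_EXIF);
    }

Whether the decoder, the encoder or the user should be responsible for keeping the two representations consistent is the open question in this thread.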
Re: [FFmpeg-devel] [PATCH] lead: support unaligned blocks
On Sun, Nov 12, 2023 at 11:40:26AM +1100, Peter Ross wrote: > Fixed ticket #10656. > --- > libavcodec/leaddec.c | 8 > 1 file changed, 4 insertions(+), 4 deletions(-) > > diff --git a/libavcodec/leaddec.c b/libavcodec/leaddec.c > index ede52fba5a..fd2018256d 100644 > --- a/libavcodec/leaddec.c > +++ b/libavcodec/leaddec.c > @@ -192,8 +192,8 @@ static int lead_decode_frame(AVCodecContext *avctx, > AVFrame * frame, > init_get_bits8(&gb, s->bitstream_buf, size); > > if (avctx->pix_fmt == AV_PIX_FMT_YUV420P) { > -for (int mb_y = 0; mb_y < avctx->height / 16; mb_y++) > -for (int mb_x = 0; mb_x < avctx->width / 16; mb_x++) > +for (int mb_y = 0; mb_y < (avctx->height + 15) / 16; mb_y++) > +for (int mb_x = 0; mb_x < (avctx->width + 15) / 16; mb_x++) > for (int b = 0; b < (yuv20p_half ? 4 : 6); b++) { > int luma_block = yuv20p_half ? 2 : 4; > const VLCElem * dc_vlc = b < luma_block ? > luma_dc_vlc.table : chroma_dc_vlc.table; > @@ -225,8 +225,8 @@ static int lead_decode_frame(AVCodecContext *avctx, > AVFrame * frame, > } > } else { > for (int f = 0; f < fields; f++) > -for (int j = 0; j < avctx->height / fields / 8; j++) > -for (int i = 0; i < avctx->width / 8; i++) > +for (int j = 0; j < (avctx->height + 7) / fields / 8; j++) > +for (int i = 0; i < (avctx->width + 7) / 8; i++) > for (int plane = 0; plane < 3; plane++) { > const VLCElem * dc_vlc = !plane ? luma_dc_vlc.table > : chroma_dc_vlc.table; > int dc_bits= !plane ? LUMA_DC_BITS : > CHROMA_DC_BITS; > -- > 2.42.0 clearing out my patch queue. posted to ml nov 2023. will apply in a few days. -- Peter (A907 E02F A6E5 0CD2 34CD 20D2 6760 79C5 AC40 DD6B) signature.asc Description: PGP signature ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
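For clarity, the change replaces truncating division of the picture size by the block size with rounded-up (ceiling) division, so a partial macroblock row or column at the right/bottom edge is still decoded. A worked example with hypothetical dimensions:

    /* Hypothetical 404x226 picture, 16x16 macroblocks:
     *   old bounds: 226 / 16        = 14 rows, 404 / 16        = 25 columns
     *               -> the bottom 2 lines and the right-most 4 pixel columns
     *                  were never decoded
     *   new bounds: (226 + 15) / 16 = 15 rows, (404 + 15) / 16 = 26 columns
     *               -> the partial edge blocks are included
     */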