On 1/25/2024 2:13 PM, Anton Khirnov wrote:
Quoting James Almer (2024-01-22 12:59:52)
I don't see how is that supposed to work. E.g. consider the following
partitioning:
┌─┬────┬─┐
│ │    ├─┤
├─┤    │ │
│ ├────┤ │
└─┴────┴─┘
How would you represent it in this API?
That's two rows and three columns. Let's assume the smallest rectangle
there is 1x1:
1x2 2x3 1x1
1x2 2x1 1x3
Meaning
tile_width[] = { 1, 2, 1, 1, 2, 1 };
tile_height[] = { 2, 3, 1, 2, 1, 3 };
As long as the sum of widths in every row and the sum of heights in
every column are the same (to ensure the result is a rectangle), it can
be represented.
If what you're trying to say is "what about offsets?", they can be
inferred from the dimensions and positions of the previous tiles within
the grid. I don't think adding yet another array to store offsets is
worth it. But maybe a helper function could build them?
This seems horribly obfuscated to me. Why do the users of this API have
to deal with all this? Why are the notions of "tile rows", "tile
columns", and "tiles" necessary?
It aligns with how HEIF (and even bitstream codecs like AV1,
internally) define a grid of images put together. Since this will
mainly be used for the former, it seems like the best way to propagate
such information.
Strictly speaking, the "tile" dimensions are available as part of each
stream in the group, so now that I made the struct not be generic for
lavu, I can remove the arrays. But placing the images in a grid still
requires the concept of rows and columns.
_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".