FFmpeg gives you access to the low-level buffers that store the image data.
You need to check which pixel format is used and manipulate the pixels yourself.

In the case of RGB (FFmpeg supports a lot of pixel formats, but you can use
swscale to convert any of them to RGB)
it looks like this:

#include <stddef.h>
#include <stdint.h>
#include <libavutil/frame.h>  // AVFrame

// this is how the struct is defined in the FreeImage library - not the
// best way (it does not take CPU endianness into account)
typedef uint8_t BYTE;

typedef struct tagRGBTRIPLE
{
  BYTE rgbtRed;
  BYTE rgbtGreen;
  BYTE rgbtBlue;
} RGBTRIPLE;

// return a pointer to the pixel at (theRow, theColumn) inside a packed
// RGB24 frame; linesize[0] is the byte stride of a single image row
RGBTRIPLE* changePixelRGB(AVFrame* theFrameRGB,
                          const size_t theRow,
                          const size_t theColumn)
{
  return (RGBTRIPLE*)&theFrameRGB->data[0][theRow * theFrameRGB->linesize[0]
                                           + theColumn * 3];
}

// a pointer to an RGB frame with the PIX_FMT_RGB24 pixel format
// (AV_PIX_FMT_RGB24 in recent FFmpeg) that you got from somewhere -
// from decoded video packet(s) or from an swscale conversion
AVFrame* aFrameRGB;

// change the pixel at row 100, column 100
RGBTRIPLE* aPixel = changePixelRGB(aFrameRGB, 100, 100);
// set color to red
aPixel->rgbtRed = 255;
aPixel->rgbtGreen = 0;
aPixel->rgbtBlue = 0;
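
For completeness, here is a minimal sketch of the swscale conversion mentioned
above. The helper name convertToRGB24, the SWS_BILINEAR flag and the use of
av_frame_get_buffer() are my own choices rather than anything mandated by the
API, and error handling is kept to the bare minimum:

#include <libswscale/swscale.h>
#include <libavutil/frame.h>

// Convert a decoded frame (any pixel format) into a newly allocated
// RGB24 frame. Returns NULL on failure.
static AVFrame* convertToRGB24(const AVFrame* src)
{
  AVFrame* dst = av_frame_alloc();
  if (!dst)
    return NULL;

  dst->format = AV_PIX_FMT_RGB24;
  dst->width  = src->width;
  dst->height = src->height;
  if (av_frame_get_buffer(dst, 0) < 0)
  {
    av_frame_free(&dst);
    return NULL;
  }

  struct SwsContext* ctx = sws_getContext(src->width, src->height, src->format,
                                          dst->width, dst->height, AV_PIX_FMT_RGB24,
                                          SWS_BILINEAR, NULL, NULL, NULL);
  if (!ctx)
  {
    av_frame_free(&dst);
    return NULL;
  }

  sws_scale(ctx, (const uint8_t* const*)src->data, src->linesize,
            0, src->height, dst->data, dst->linesize);
  sws_freeContext(ctx);
  return dst;
}

The returned frame can then be passed to changePixelRGB() above; the
SwsContext is freed inside the helper, but you still have to release the
frame with av_frame_free() when you are done with it.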