[FFmpeg-user] FFmpeg filter_complex overlays work well on Windows but generate errors on Linux and OSX

2023-12-22 Thread Suraj Kadam
So, I am applying multiple overlays to a video using FFmpeg. On Windows this
always works fine, whatever the size and number of overlays, but on macOS or
Linux it fails with an error about reinitializing filters.

Let's say I have a video of 190 MB and about 182 overlay PNGs to be applied
with filter_complex.

  for (let i = 0; i < pngPaths.length; i++) {
    const start = Number(transcriptions[i].start).toFixed(2);
    const end = Number(transcriptions[i].end).toFixed(2);
    const overlayX = '0';
    const overlayY = '0';
    filterComplex += `[${i + 1}:v] overlay=${overlayX}:${overlayY}:enable='between(t,${start},${end})'`;
    if (i < pngPaths.length - 1) {
      filterComplex += '[vout];[vout]';
    }
  }

I am creating the filter_complex string using the above function.
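With two overlays, and assuming filterComplex is initialized to '[0:v]' (as in
the fuller listing further down), the loop produces a string like this, the
timestamps being placeholders:

  [0:v][1:v] overlay=0:0:enable='between(t,0.00,2.50)'[vout];[vout][2:v] overlay=0:0:enable='between(t,2.50,5.00)'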

And then I run the FFmpeg command like this:

  const outputPath =
    path.join(__dirname, '..') +
    `/projects/${project.originalVideoFile}-output.mp4`;
  this.appGateway.sendProgress(JSON.stringify(userId), 'progress', 0);
  return new Promise((resolve, reject) => {
    const ffmpegCommand = ffmpeg();

    // Add video input
    ffmpegCommand.input(videoPath);

    // Add PNG inputs
    for (const pngPath of pngPaths) {
      ffmpegCommand.input(pngPath);
    }
    if (project?.user?.subscription?.plan?.id === 1) {
      // Note: this extra watermark input is not referenced in filterComplex above
      ffmpegCommand.input('watermark.png');
    }
    ffmpegCommand
      .complexFilter(filterComplex)
      .outputOptions('-c:v', 'libx264')
      .output(outputPath)
      .on('progress', (progress) => {
        this.appGateway.sendProgress(
          JSON.stringify(userId),
          'progress',
          Math.round(progress.percent),
        );
      })
      .on('end', async () => {
        // socket emissions omitted
        resolve('done');
      })
      .on('error', (error, stdout, stderr) => {
        reject(error);
      })
      .run();
  });

"You can ignore the socket emmisions"

This works fine on Windows, but when processing on macOS or Linux it
gives the following error:

Stream #95:0 (png) -> overlay (graph 0)
  Stream #96:0 (png) -> overlay (graph 0)
  Stream #97:0 (png) -> overlay (graph 0)
  ...
  Stream #167:0 (png) -> overlay (graph 0)
  Stream #168:0 (png) -> overlay (graph 0)
  Stream #169:0 (png) -> 

[FFmpeg-user] Error with FFmpeg and Node Canvas for SubtitleO

2023-09-02 Thread Suraj Kadam
We have a subtitle automation tool with a ReactJS frontend and a NestJS
backend.

The flow is: on the frontend we first show the user a preview of the
subtitles and let them modify the styling and the position of the subtitles
or the subtitle container; in the preview, everything is done with CSS and
JS.

We are looking for a way to sync the values from the frontend so that they
are applied accurately on the backend.

Our current approach is to generate PNG images with node-canvas and overlay
them on the video at particular timestamps using the overlay filter.

The first concern is scaling and dimensions: on the frontend we use Video.js,
and for a better viewing experience we resize the container per device, so
the video's width and height on the frontend are (mostly) smaller than the
video's actual dimensions.

When we use those frontend dimensions, the subtitles come out too small when
rendered on the video, because the width and height of the canvas shrink
along with them.

Also, for the position, we are using the react-draggable library to make the
subtitles draggable and place them on the video in the preview, but we still
have no reliable way to sync that position to the backend.
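A minimal sketch of the mapping we have in mind, assuming the frontend sends
the rendered player width together with the dragged offsets and the preview
font size (previewWidth, dragX, dragY, and previewFontSize are hypothetical
field names), and that the preview keeps the video's aspect ratio:

  // Hypothetical: convert preview-space values into native video pixels.
  // A single scale factor works if the preview preserves the aspect ratio.
  toVideoSpace(
    preview: { previewWidth: number; dragX: number; dragY: number; previewFontSize: number },
    video: { width: number; height: number },
  ) {
    const scale = video.width / preview.previewWidth; // e.g. 1920 / 640 = 3
    return {
      x: Math.round(preview.dragX * scale),           // overlay x in video pixels
      y: Math.round(preview.dragY * scale),           // overlay y in video pixels
      fontSize: Math.round(preview.previewFontSize * scale), // for node-canvas
    };
  }

The scaled x/y could then replace the fixed '(W-w)/2' / 'H-h-10' expressions
in the overlay filter, and the scaled font size the fixed 41px in
generateSubtitlePNG below.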

We are just looking for a proper roadmap or solution for these blockers.

Would appreciate your help on this.

Thanks.

Here's the current implementation of the code:

wrapText(context, text, x, y, maxWidth, lineHeight) {
    const words = text.split(' ');
    let line = '';

    for (let n = 0; n < words.length; n++) {
      const testLine = line + words[n] + ' ';
      const metrics = context.measureText(testLine);
      const testWidth = metrics.width;
      if (testWidth > maxWidth && n > 0) {
        context.fillText(line, x, y);
        line = words[n] + ' ';
        y += lineHeight;
      } else {
        line = testLine;
      }
    }
    context.fillText(line, x, y);
  }
  generateSubtitlePNG(transcription: any, outputPath: string, w, h) {
    // Note: w and h (the video dimensions) are not used yet; the canvas
    // is currently hard-coded to 720x200.
    const canvas = createCanvas(720, 200);
    const ctx = canvas.getContext('2d');

    ctx.fillStyle = 'rgba(0, 0, 0, 0)';
    ctx.fillRect(0, 0, canvas.width, canvas.height);

    const fontSize = 41;
    ctx.font = `bold ${fontSize}px Arial`;
    ctx.fillStyle = 'white';
    ctx.textAlign = 'center';

    const maxWidth = canvas.width; // full canvas width, no horizontal padding
    const lineHeight = fontSize * 1.4; // adjust as needed
    const x = canvas.width / 2;
    const y = (canvas.height - lineHeight) / 2 + fontSize / 2; // adjusted for font height

    this.wrapText(ctx, transcription.text, x, y, maxWidth, lineHeight);

    const buffer = canvas.toBuffer('image/png');
    fs.writeFileSync(outputPath, buffer);
  }
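For the scaling blocker, here is a minimal variation that actually uses the
video dimensions, assuming the 720px canvas and 41px font above are treated
as a design baseline to scale from (that baseline is an assumption, not
something we have measured):

  generateSubtitlePNGScaled(transcription: any, outputPath: string, w: number, h: number) {
    // Hypothetical sketch: size the canvas to the native video width so the
    // PNG overlays 1:1, and scale the font by the same factor.
    const scale = w / 720; // assumed 720px design baseline
    const canvas = createCanvas(w, Math.round(200 * scale));
    const ctx = canvas.getContext('2d');

    const fontSize = Math.round(41 * scale);
    ctx.font = `bold ${fontSize}px Arial`;
    ctx.fillStyle = 'white';
    ctx.textAlign = 'center';

    const lineHeight = fontSize * 1.4;
    const x = canvas.width / 2;
    const y = (canvas.height - lineHeight) / 2 + fontSize / 2;
    this.wrapText(ctx, transcription.text, x, y, canvas.width, lineHeight);

    fs.writeFileSync(outputPath, canvas.toBuffer('image/png'));
  }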
  async applySubtitlesNew(
    addSubtitlesDto: AddSubtitlestDto,
    userId: number,
    project: any,
  ) {
    if (userId !== project.user.id) {
      throw new UnauthorizedException();
    }

    const videoPath = `uploads/${project.originalVideoFile}`;

    const transcriptions = JSON.parse(addSubtitlesDto.transcriptions);
    const pngPaths = [];
    const dimensions = await this.getVideoDimensions(videoPath);
    console.log(dimensions);

    // Generate PNGs for each transcription
    for (let i = 0; i < transcriptions.length; i++) {
      const outputPath = `subtitles/${project.originalVideoFile}-subtitle-${i}.png`;
      this.generateSubtitlePNG(
        transcriptions[i],
        outputPath,
        dimensions?.width,
        dimensions?.height,
      );
      pngPaths.push(outputPath);
    }

    let filterComplex = '[0:v]';
    for (let i = 0; i < pngPaths.length; i++) {
      const start = transcriptions[i].start.toFixed(2);
      const end = transcriptions[i].end.toFixed(2);
      const overlayX = '(W-w)/2';
      const overlayY = 'H-h-10';
      filterComplex += `[${i + 1}:v] overlay=${overlayX}:${overlayY}:enable='between(t,${start},${end})'`;
      if (i < pngPaths.length - 1) {
        filterComplex += '[vout];[vout]';
      }
    }

    console.log(filterComplex);

    // Run FFmpeg
    return new Promise((resolve, reject) => {
      const ffmpegCommand = ffmpeg();

      // Add video input
      ffmpegCommand.input(videoPath);

      // Add PNG inputs
      for (const pngPath of pngPaths) {
        ffmpegCommand.input(pngPath);
      }

      ffmpegCommand
        .complexFilter(filterComplex)
        .outputOptions('-c:v', 'libx264')
        .output('output.mp4')
        .on('end', () => {
          // Handle completion: clean up PNG files
          for (const pngPath of pngPaths) {
            fs.unlinkSync(pngPath);
          }
          resolve('done');
        })
        .on('error', (error, stdout, stderr) => {
          // Handle error
          console.log(stderr);
          reject(error);
        })
        .run();
    });
  }