Dear everyone,

I am finding myself increasingly using DNN-based super-resolution upscalers on textual and drawn images and videos. Thus far, I have been content with the waifu2x image upscaler [1]. However, when processing videos, exploding them into frame images is space-consuming, and upscaling the frames one by one is time-consuming. To eliminate the space concern, I went looking for an ffmpeg video filter that would save me the round trip from video to frame images and back.
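For context, the frame-by-frame workflow I have been using looks roughly like the sketch below. All filenames and the frame rate are placeholders, and "waifu2x" stands in for whichever waifu2x CLI frontend is installed, so check your frontend's actual flags. It is written as a dry run that only prints the commands:

```shell
#!/bin/sh
# Dry-run sketch of the explode/upscale/reassemble pipeline.
# Change run() to execute "$@" instead of echoing it to actually run the steps.
run() { echo "$@"; }

run mkdir -p frames upscaled

# 1. Explode the video into PNG frames (this is the space-consuming step).
run ffmpeg -i input.mkv frames/%06d.png

# 2. Upscale each frame (this is the time-consuming step).
#    In a dry run no frames exist, so the glob stays literal and the loop
#    prints a single representative command.
for f in frames/*.png; do
    run waifu2x -i "$f" -o "upscaled/${f#frames/}"
done

# 3. Reassemble the frames, copying the audio track from the original.
run ffmpeg -framerate 24 -i upscaled/%06d.png -i input.mkv \
    -map 0:v -map 1:a -c:a copy upscaled.mkv
```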
In the ffmpeg documentation, I discovered the sr video filter [2], which claims to do just that. However, there seem to be no pre-trained DNN models in the filter's model repository [3], and the documentation for producing models is severely lacking: the dependencies for running the Python scripts are underspecified, and the training scripts keep failing in random places due to missing Python packages and an unsupported Python version. Is there any better documentation or, better yet, are there some pre-trained models to get started with?

[1]: https://github.com/nagadomi/waifu2x
[2]: https://ffmpeg.org/ffmpeg-filters.html#sr-1
[3]: https://github.com/XueweiMeng/sr

Kind Regards,
Vit Novotny
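P.S. For reference, the kind of invocation I was hoping to run, using the sr filter options documented in [2] (the model filename "espcn.model" is a placeholder for whatever the training scripts would emit; that file is exactly what I have been unable to produce). Shown as a dry run that only prints the command:

```shell
# Dry run: build and print the command instead of executing it.
# 'espcn.model' is a placeholder model file for the native DNN backend.
sr_cmd="ffmpeg -i input.mkv -vf sr=dnn_backend=native:model=espcn.model:scale_factor=2 output.mkv"
echo "$sr_cmd"
```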
_______________________________________________
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".