Re: [FFmpeg-devel] [PATCH 2/2] avfilter/dnn_processing: Add TensorRT backend

2021-08-20 Thread Xiaowei Wang

On 2021/7/25 21:04, James Almer wrote:

On 7/25/2021 8:58 AM, Xiaowei Wang wrote:

The backend can be called as:
-vf dnn_processing=dnn_backend=tensorrt:model="model":input=:output=

As TensorRT provides a C++ API rather than C, the TensorRT implementation is
separated into a wrapper.
The wrapper is placed in https://github.com/DutchPiPi/nv-tensorrt-wrapper
Please build & install the wrapper before compiling ffmpeg.
Please see https://github.com/DutchPiPi/FFmpeg-trt-backend-test for how to
configure ffmpeg and generate a TensorRT engine for tests.

Signed-off-by: Xiaowei Wang
---
  libavfilter/dnn/Makefile   |   2 +-
  libavfilter/dnn/dnn_backend_tensorrt.c |  97 +++-
  libavfilter/dnn/dnn_backend_tensorrt.h |  40 +-
  libavfilter/dnn/dnn_io_proc_trt.cu |  55 --
  libavfilter/dnn/trt_class_wrapper.cpp  | 731 -
  libavfilter/dnn/trt_class_wrapper.h    |  49 --
  6 files changed, 109 insertions(+), 865 deletions(-)
  delete mode 100644 libavfilter/dnn/dnn_io_proc_trt.cu
  delete mode 100644 libavfilter/dnn/trt_class_wrapper.cpp
  delete mode 100644 libavfilter/dnn/trt_class_wrapper.h

diff --git a/libavfilter/dnn/Makefile b/libavfilter/dnn/Makefile
index f9ea7ca386..4661d3b2cb 100644
--- a/libavfilter/dnn/Makefile
+++ b/libavfilter/dnn/Makefile
@@ -16,6 +16,6 @@ OBJS-$(CONFIG_DNN)   += dnn/dnn_backend_native_layer_mat

  DNN-OBJS-$(CONFIG_LIBTENSORFLOW) += dnn/dnn_backend_tf.o
  DNN-OBJS-$(CONFIG_LIBOPENVINO)   += dnn/dnn_backend_openvino.o
-DNN-OBJS-$(CONFIG_LIBTENSORRT)   += dnn/dnn_backend_tensorrt.o dnn/trt_class_wrapper.o dnn/dnn_io_proc_trt.ptx.o
+DNN-OBJS-$(CONFIG_LIBTENSORRT)   += dnn/dnn_backend_tensorrt.o

  OBJS-$(CONFIG_DNN)   += $(DNN-OBJS-yes)
diff --git a/libavfilter/dnn/dnn_backend_tensorrt.c b/libavfilter/dnn/dnn_backend_tensorrt.c
index b45b770a77..e50ebc6c99 100644
--- a/libavfilter/dnn/dnn_backend_tensorrt.c
+++ b/libavfilter/dnn/dnn_backend_tensorrt.c
@@ -25,45 +25,119 @@
   * DNN TensorRT backend implementation.
   */

-#include "trt_class_wrapper.h"
  #include "dnn_backend_tensorrt.h"

-#include "libavutil/mem.h"
  #include "libavformat/avio.h"
+#include "libavutil/mem.h"
  #include "libavutil/avassert.h"
  #include "libavutil/opt.h"
  #include "libavutil/avstring.h"
+#include "libavutil/buffer.h"
+#include "libavutil/pixfmt.h"
+#include "libavutil/pixdesc.h"
+
  #include "dnn_io_proc.h"
  #include "../internal.h"
-#include "libavutil/buffer.h"
+#include "trt_class_wrapper.h"
+
+#include 
+#include 
+#include 
  #include 

  #define OFFSET(x) offsetof(TRTContext, x)
  #define FLAGS AV_OPT_FLAG_FILTERING_PARAM
  static const AVOption dnn_tensorrt_options[] = {
-    { "device", "index of the GPU to run model", 
OFFSET(options.device), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, INT_MAX, 
FLAGS },
+    { "device", "index of the GPU to run model", 
OFFSET(options.device),    AV_OPT_TYPE_INT,    { .i64 = 0 }, 0, 
INT_MAX, FLAGS },
+    { "plugin", "path to the plugin so", 
OFFSET(options.plugin_so), AV_OPT_TYPE_STRING, { .str = NULL}, 0, 
0, FLAGS },

  { NULL }
  };
  AVFILTER_DEFINE_CLASS(dnn_tensorrt);

-DNNModel *ff_dnn_load_model_trt(const char *model_filename,DNNFunctionType func_type,
+static TRTWrapper *wrapper = NULL;
+
+static int load_trt_backend_lib(TRTWrapper *w, const char *so_path, int mode)
+{
+    w->so_handle = dlopen("libnvtensorrt.so", mode);


No, dlopen() is not allowed for this kind of thing. Linking must be
added at build time.

You for that matter apparently add support for build time linking in
patch 1, then attempt to remove it in this one, leaving cruft in the
configure script. Why?

I have not received any responses, so I am re-sending.

As TensorRT only provides C++ APIs, the implementation of the backend
inevitably contains C++ code, as in patch 1. After patch 1 was finished, I
heard that it would be better to avoid submitting C++ code, so I put the
C++ code inside a C wrapper (libnvtensorrt.so). I found that FFmpeg uses
dlopen() to call CUDA and the codec SDK, and I thought dlopen() might be
preferable here as well, so I used it.


If dlopen() is not allowed, I can keep the C++ code in the wrapper but
link it at build time. I will also update the configure script and change
the dependency to libnvtensorrt rather than libnvinfer. (libnvinfer is
part of TensorRT; libnvtensorrt is the C wrapper around my C++ code.)
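
To make that concrete, the C interface exported by libnvtensorrt.so would look
roughly like the sketch below. Only the three entry points (load_model_trt,
execute_model_trt, free_model_trt) are taken from the dlsym() lookups in the
patch; the header name, the opaque handle type and the parameter lists are
simplified placeholders, not the wrapper's real prototypes.

    /* Sketch of the C interface a build-time-linked wrapper could export.
     * Function names match the dlsym() lookups in the patch; everything
     * else here is an illustrative placeholder. */
    #ifndef NV_TENSORRT_WRAPPER_H
    #define NV_TENSORRT_WRAPPER_H

    #include <stddef.h>

    #ifdef __cplusplus
    extern "C" {                 /* C++ implementation, C linkage for FFmpeg */
    #endif

    /* Opaque handle hiding the TensorRT C++ objects (engine, context, ...). */
    typedef struct TRTModelHandle TRTModelHandle;

    /* Deserialize a TensorRT engine file and set up the runtime state. */
    TRTModelHandle *load_model_trt(const char *engine_path, int device);

    /* Run inference on raw tensor buffers; returns 0 on success. */
    int execute_model_trt(TRTModelHandle *h,
                          const void *input,  size_t input_size,
                          void *output,       size_t output_size);

    /* Release the engine, execution context and CUDA resources. */
    void free_model_trt(TRTModelHandle *h);

    #ifdef __cplusplus
    }
    #endif

    #endif /* NV_TENSORRT_WRAPPER_H */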



+    if (!w->so_handle)
+    {
+    return AVERROR(EIO);
+    }
+
+    w->load_model_func = (tloadModelTrt*)dlsym(w->so_handle, 
"load_model_trt");
+    w->execute_model_func = 

[FFmpeg-devel] Re: [PATCH 2/2] avfilter/dnn_processing: Add TensorRT backend

2021-08-17 Thread Xiaowei Wang


>No, dlopen() is not allowed for this kind of thing. Linking must be added at 
>build time.

>You for that matter apparently add support for build time linking in patch 1, 
>then attempt to remove it in this one, leaving cruft in the configure script. 
>Why?

Sorry for the late reply; Outlook automatically put the mail in the junk folder.
As I replied earlier, TensorRT only provides a C++ API, which means the filter
has to be implemented in C++. However, I was told that submitting C++ code is
not a good idea, so I came up with the dlopen() approach: it lets me wrap the
C++ code behind C interfaces and submit only C code.

If dlopen() is not allowed and submitting C++ code is fine, I will reorganize
the code and go back to what I did in patch 1. Is this OK?
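
Concretely, going back to build-time linking would mean replacing the
libnvinfer check from patch 1 with a configure check against the wrapper
library, roughly along the lines of the sketch below. The header name
nv_tensorrt_wrapper.h is a placeholder; load_model_trt is one of the symbols
the current patch resolves with dlsym().

    enabled libtensorrt   && require libtensorrt nv_tensorrt_wrapper.h load_model_trt -lnvtensorrt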
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


[FFmpeg-devel] Re: [PATCH 2/2] avfilter/dnn_processing: Add TensorRT backend

2021-07-25 Thread Xiaowei Wang
The original idea was to submit C++ code directly into FFmpeg. However, after
the patch was ready, I was told that this was not a good idea, so I wrapped the
TensorRT C++ implementation behind C interfaces
(https://github.com/DutchPiPi/nv-tensorrt-wrapper) and removed the C++ files.

Thanks,
Xiaowei Wang

-----Original Message-----
From: Jean-Baptiste Kempf
Sent: 25 July 2021 20:58
To: Xiaowei Wang; FFmpeg development discussions and patches
Subject: Re: [FFmpeg-devel] [PATCH 2/2] avfilter/dnn_processing: Add TensorRT backend

On Sun, 25 Jul 2021, at 13:58, Xiaowei Wang wrote:
>  libavfilter/dnn/trt_class_wrapper.cpp  | 731 -

So, you add files in the first patch, and then delete it on the second one???

--
Jean-Baptiste Kempf -  President
+33 672 704 734


[FFmpeg-devel] [PATCH 2/2] avfilter/dnn_processing: Add TensorRT backend

2021-07-25 Thread Xiaowei Wang
The backend can be called as:
-vf dnn_processing=dnn_backend=tensorrt:model="model":input=:output=
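
For reference, a complete command line might look like the following; the
engine file name and the tensor names are placeholders that depend on how the
TensorRT engine was generated, not values taken from this patch:

    ffmpeg -i input.mp4 -vf dnn_processing=dnn_backend=tensorrt:model=sr.engine:input=x:output=y output.mp4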

As TensorRT provides a C++ API rather than C, the TensorRT implementation is
separated into a wrapper.
The wrapper is placed in https://github.com/DutchPiPi/nv-tensorrt-wrapper
Please build & install the wrapper before compiling ffmpeg.
Please see https://github.com/DutchPiPi/FFmpeg-trt-backend-test for how to
configure ffmpeg and generate a TensorRT engine for tests.

Signed-off-by: Xiaowei Wang 
---
 libavfilter/dnn/Makefile   |   2 +-
 libavfilter/dnn/dnn_backend_tensorrt.c |  97 +++-
 libavfilter/dnn/dnn_backend_tensorrt.h |  40 +-
 libavfilter/dnn/dnn_io_proc_trt.cu |  55 --
 libavfilter/dnn/trt_class_wrapper.cpp  | 731 -
 libavfilter/dnn/trt_class_wrapper.h|  49 --
 6 files changed, 109 insertions(+), 865 deletions(-)
 delete mode 100644 libavfilter/dnn/dnn_io_proc_trt.cu
 delete mode 100644 libavfilter/dnn/trt_class_wrapper.cpp
 delete mode 100644 libavfilter/dnn/trt_class_wrapper.h

diff --git a/libavfilter/dnn/Makefile b/libavfilter/dnn/Makefile
index f9ea7ca386..4661d3b2cb 100644
--- a/libavfilter/dnn/Makefile
+++ b/libavfilter/dnn/Makefile
@@ -16,6 +16,6 @@ OBJS-$(CONFIG_DNN)   += dnn/dnn_backend_native_layer_mat
 
 DNN-OBJS-$(CONFIG_LIBTENSORFLOW) += dnn/dnn_backend_tf.o
 DNN-OBJS-$(CONFIG_LIBOPENVINO)   += dnn/dnn_backend_openvino.o
-DNN-OBJS-$(CONFIG_LIBTENSORRT)   += dnn/dnn_backend_tensorrt.o dnn/trt_class_wrapper.o dnn/dnn_io_proc_trt.ptx.o
+DNN-OBJS-$(CONFIG_LIBTENSORRT)   += dnn/dnn_backend_tensorrt.o
 
 OBJS-$(CONFIG_DNN)   += $(DNN-OBJS-yes)
diff --git a/libavfilter/dnn/dnn_backend_tensorrt.c b/libavfilter/dnn/dnn_backend_tensorrt.c
index b45b770a77..e50ebc6c99 100644
--- a/libavfilter/dnn/dnn_backend_tensorrt.c
+++ b/libavfilter/dnn/dnn_backend_tensorrt.c
@@ -25,45 +25,119 @@
  * DNN TensorRT backend implementation.
  */
 
-#include "trt_class_wrapper.h"
 #include "dnn_backend_tensorrt.h"
 
-#include "libavutil/mem.h"
 #include "libavformat/avio.h"
+#include "libavutil/mem.h"
 #include "libavutil/avassert.h"
 #include "libavutil/opt.h"
 #include "libavutil/avstring.h"
+#include "libavutil/buffer.h"
+#include "libavutil/pixfmt.h"
+#include "libavutil/pixdesc.h"
+
 #include "dnn_io_proc.h"
 #include "../internal.h"
-#include "libavutil/buffer.h"
+#include "trt_class_wrapper.h"
+
+#include 
+#include 
+#include 
 #include 
 
 #define OFFSET(x) offsetof(TRTContext, x)
 #define FLAGS AV_OPT_FLAG_FILTERING_PARAM
 static const AVOption dnn_tensorrt_options[] = {
-{ "device", "index of the GPU to run model", OFFSET(options.device), 
AV_OPT_TYPE_INT, { .i64 = 0 }, 0, INT_MAX, FLAGS },
+{ "device", "index of the GPU to run model", OFFSET(options.device),
AV_OPT_TYPE_INT,{ .i64 = 0 }, 0, INT_MAX, FLAGS },
+{ "plugin", "path to the plugin so", OFFSET(options.plugin_so), 
AV_OPT_TYPE_STRING, { .str = NULL}, 0, 0, FLAGS },
 { NULL }
 };
 AVFILTER_DEFINE_CLASS(dnn_tensorrt);
 
-DNNModel *ff_dnn_load_model_trt(const char *model_filename,DNNFunctionType func_type,
+static TRTWrapper *wrapper = NULL;
+
+static int load_trt_backend_lib(TRTWrapper *w, const char *so_path, int mode)
+{
+    w->so_handle = dlopen("libnvtensorrt.so", mode);
+    if (!w->so_handle)
+    {
+        return AVERROR(EIO);
+    }
+
+    w->load_model_func = (tloadModelTrt*)dlsym(w->so_handle, "load_model_trt");
+    w->execute_model_func = (texecuteModelTrt*)dlsym(w->so_handle, "execute_model_trt");
+    w->free_model_func = (tfreeModelTrt*)dlsym(w->so_handle, "free_model_trt");
+    if (!w->load_model_func || !w->execute_model_func || !w->free_model_func)
+    {
+        return AVERROR(EIO);
+    }
+
+    return 0;
+}
+
+DNNModel *ff_dnn_load_model_trt(const char *model_filename,DNNFunctionType func_type,
                                 const char *options, AVFilterContext *filter_ctx)
 {
+    char id_buf[64];
+    AVBufferRef *device_ref = NULL;
+    TRTContext *ctx = (TRTContext*)av_mallocz(sizeof(TRTContext));
+
+    int ret = 0;
+
     DNNModel *model = NULL;
     model = (DNNModel*)av_mallocz(sizeof(DNNModel));
     if (!model){
         return NULL;
     }
+    wrapper = av_mallocz(sizeof(TRTWrapper));
+    wrapper->ctx = ctx;
+    if (load_trt_backend_lib(wrapper, "libnvtensorrt.so", RTLD_LAZY) != 0)
+    {
+        av_log(ctx, AV_LOG_ERROR, "Cannot load wrapper functions. Please check if libnvtensorrt.so is installed\n");
+        return NULL;
+    }
+    ctx->av_class = &dnn_tensorrt_class;
+    av_opt_set_default

[FFmpeg-devel] [PATCH 1/2] avfilter/dnn/dnn_backend_trt: Update with master and sign-off

2021-07-25 Thread Xiaowei Wang
Signed-off-by: Xiaowei Wang 
---
 configure  |   6 +-
 libavfilter/dnn/Makefile   |   1 +
 libavfilter/dnn/dnn_backend_tensorrt.c |  77 +++
 libavfilter/dnn/dnn_backend_tensorrt.h |  72 +++
 libavfilter/dnn/dnn_interface.c|  10 +
 libavfilter/dnn/dnn_io_proc_trt.cu |  55 ++
 libavfilter/dnn/trt_class_wrapper.cpp  | 731 +
 libavfilter/dnn/trt_class_wrapper.h|  49 ++
 libavfilter/dnn_interface.h|   2 +-
 libavfilter/vf_dnn_processing.c|   3 +
 10 files changed, 1004 insertions(+), 2 deletions(-)
 create mode 100644 libavfilter/dnn/dnn_backend_tensorrt.c
 create mode 100644 libavfilter/dnn/dnn_backend_tensorrt.h
 create mode 100644 libavfilter/dnn/dnn_io_proc_trt.cu
 create mode 100644 libavfilter/dnn/trt_class_wrapper.cpp
 create mode 100644 libavfilter/dnn/trt_class_wrapper.h

diff --git a/configure b/configure
index b124411609..e496a66621 100755
--- a/configure
+++ b/configure
@@ -272,6 +272,8 @@ External library support:
   --enable-libsvtav1   enable AV1 encoding via SVT [no]
   --enable-libtensorflow   enable TensorFlow as a DNN module backend
for DNN based filters like sr [no]
+  --enable-libtensorrt enable TensorRT as a DNN module backend
+   for DNN based filters like sr [no]
   --enable-libtesseractenable Tesseract, needed for ocr filter [no]
   --enable-libtheora   enable Theora encoding via libtheora [no]
   --enable-libtls  enable LibreSSL (via libtls), needed for https support
@@ -1839,6 +1841,7 @@ EXTERNAL_LIBRARY_LIST="
 libssh
 libsvtav1
 libtensorflow
+libtensorrt
 libtesseract
 libtheora
 libtwolame
@@ -2660,7 +2663,7 @@ cbs_mpeg2_select="cbs"
 cbs_vp9_select="cbs"
 dct_select="rdft"
 dirac_parse_select="golomb"
-dnn_suggest="libtensorflow libopenvino"
+dnn_suggest="libtensorflow libopenvino libtensorrt"
 dnn_deps="avformat swscale"
 error_resilience_select="me_cmp"
 faandct_deps="faan"
@@ -6487,6 +6490,7 @@ enabled libspeex  && require_pkg_config libspeex speex speex/speex.h spe
 enabled libsrt&& require_pkg_config libsrt "srt >= 1.3.0" srt/srt.h srt_socket
 enabled libsvtav1 && require_pkg_config libsvtav1 "SvtAv1Enc >= 0.8.4" EbSvtAv1Enc.h svt_av1_enc_init_handle
 enabled libtensorflow && require libtensorflow tensorflow/c/c_api.h TF_Version -ltensorflow
+enabled libtensorrt   && require_cpp libtensorrt NvInfer.h nvinfer1::Dims2 -lnvinfer -lcudart
 enabled libtesseract  && require_pkg_config libtesseract tesseract tesseract/capi.h TessBaseAPICreate
 enabled libtheora && require libtheora theora/theoraenc.h th_info_init -ltheoraenc -ltheoradec -logg
 enabled libtls&& require_pkg_config libtls libtls tls.h tls_configure
diff --git a/libavfilter/dnn/Makefile b/libavfilter/dnn/Makefile
index 4cfbce0efc..f9ea7ca386 100644
--- a/libavfilter/dnn/Makefile
+++ b/libavfilter/dnn/Makefile
@@ -16,5 +16,6 @@ OBJS-$(CONFIG_DNN)   += dnn/dnn_backend_native_layer_mat
 
 DNN-OBJS-$(CONFIG_LIBTENSORFLOW) += dnn/dnn_backend_tf.o
 DNN-OBJS-$(CONFIG_LIBOPENVINO)   += dnn/dnn_backend_openvino.o
+DNN-OBJS-$(CONFIG_LIBTENSORRT)   += dnn/dnn_backend_tensorrt.o dnn/trt_class_wrapper.o dnn/dnn_io_proc_trt.ptx.o
 
 OBJS-$(CONFIG_DNN)   += $(DNN-OBJS-yes)
diff --git a/libavfilter/dnn/dnn_backend_tensorrt.c b/libavfilter/dnn/dnn_backend_tensorrt.c
new file mode 100644
index 00..b45b770a77
--- /dev/null
+++ b/libavfilter/dnn/dnn_backend_tensorrt.c
@@ -0,0 +1,77 @@
+/*
+* Copyright (c) 2021 NVIDIA CORPORATION. All rights reserved.
+*
+* Permission is hereby granted, free of charge, to any person obtaining a
+* copy of this software and associated documentation files (the "Software"),
+* to deal in the Software without restriction, including without limitation
+* the rights to use, copy, modify, merge, publish, distribute, sublicense,
+* and/or sell copies of the Software, and to permit persons to whom the
+* Software is furnished to do so, subject to the following conditions:
+*
+* The above copyright notice and this permission notice shall be included in
+* all copies or substantial portions of the Software.
+*
+* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE US

[FFmpeg-devel] [PATCH 1/2] avfilter/dnn/dnn_backend_trt: Update with master and sign-off

2021-07-25 Thread Xiaowei Wang



0001-avfilter-dnn-dnn_backend_trt-Update-with-master-and-.patch
Description: 0001-avfilter-dnn-dnn_backend_trt-Update-with-master-and-.patch