On Fri, Apr 26, 2019 at 02:41, Guo, Yejun <yejun....@intel.com> wrote:
>
> > -----Original Message-----
> > From: ffmpeg-devel [mailto:ffmpeg-devel-boun...@ffmpeg.org] On Behalf Of
> > Guo, Yejun
> > Sent: Friday, April 19, 2019 11:22 PM
> > To: FFmpeg development discussions and patches <ffmpeg-devel@ffmpeg.org>
> > Subject: Re: [FFmpeg-devel] native mode in FFmpeg DNN module
> >
> > > -----Original Message-----
> > > From: ffmpeg-devel [mailto:ffmpeg-devel-boun...@ffmpeg.org] On Behalf Of
> > > Pedro Arthur
> > > Sent: Friday, April 19, 2019 10:43 PM
> > > To: FFmpeg development discussions and patches <ffmpeg-devel@ffmpeg.org>
> > > Subject: Re: [FFmpeg-devel] native mode in FFmpeg DNN module
> > >
> > > Hi,
> > >
> > > On Fri, Apr 19, 2019 at 05:41, Guo, Yejun <yejun....@intel.com> wrote:
> > > >
> > > > Option 2)
> > > > Write C code in FFmpeg to convert the TensorFlow file format (format 1)
> > > > directly into the in-memory representation (format 3), so that we
> > > > control everything within the FFmpeg community. The conversion could
> > > > also be extended to import more file formats such as Torch, Darknet,
> > > > etc. OpenCV, for example, uses this method.
> > > >
> > > > The in-memory representation (format 3) can stay as it currently is.
> > >
> > > Option 2 would be ideal, as it does not introduce any dependency for
> > > using the native backend.
> > > Yet I'm not sure how complex implementing the TF model reader can be;
> > > if I remember correctly, the student said it was not trivial at the
> > > time.
> >
> > Yes, it is not easy, but I think it is worth doing. For a reference on
> > the complexity involved, see
> > https://github.com/opencv/opencv/blob/master/modules/dnn/src/tensorflow/tf_importer.cpp.
> >
> > > Is the TF model file format stable? If not, it will be a maintenance
> > > burden to keep it working whenever TF releases a new version. This
> > > point makes me think having control over our own file format is good.
> >
> > IMHO, this issue is there no matter which method is used, unless our
> > format could be exported by TensorFlow itself (which is unlikely).
> >
> > Whenever TF releases a new version with a new file format, we still have
> > to change the Python script in phase 1 (converting the TF model to our
> > format), which is moreover an external dependency hosted at
> > https://github.com/HighVoltageRocknRoll/sr.
> >
> > From an effort perspective, the current implementation is better since
> > the Python script is simpler. But I still think option 2 is worth
> > implementing as the ideal technical direction.
>
> I looked a bit more into https://github.com/HighVoltageRocknRoll/sr; it is
> actually not a converter (from TF model to native model), but rather
> hard-coded for the given models. And the native model is not exactly the
> same as the TF model; it even changes the behavior of the pad parameter of
> the conv layer.
>
> If the community is open to option 2, I'll try it.

Option 2 is fine for me.

_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel