Re: nim-ffmpeg wrapper: how to continue?

2017-11-02 Thread mratsim
It might be easier to use Avisynth or Vapoursynth with MVTools directly:

  * [MVTools from Avisynth wiki](http://avisynth.nl/index.php/MVTools)
  * [MVTools technical page](http://avisynth.nl/index.php/MVTools)



Example to blur a blocky area with an occlusion mask:
AVISource("c:\test.avi") # or MPEG2Source, DirectShowSource, some previous filter, etc.
super = MSuper()
vectors = MAnalyse(super, isb = false)
compensation = MCompensate(super, vectors) # or use the MFlow function here
# prepare a blurred frame with some strong blur or deblock function:
blurred = compensation.DeBlock(quant=51)
badmask = MMask(vectors, kind = 2, ml = 50)
Overlay(compensation, blurred, mask=badmask) # or use the faster MaskedMerge function of MaskTools


This can be fed directly to mencoder or to x264 compiled with avs (Avisynth script) support. mpv can read avs/vapoursynth scripts directly, so you can have a "REPL" for video files with vapoursynth/avisynth.

And last, a port to Vapoursynth:

  * [MVTools for Vapoursynth](https://github.com/dubhater/vapoursynth-mvtools)




Re: Help with parallelizing a loop

2017-11-02 Thread niofis
I would suggest you check this page: [10 Nim One Liners to Impress Your Friends](https://blog.ubergarm.com/blog/archive/archive-ten-nim-one-liners.md). Also check my own implementation of a parallel map with some improvements: [pmap](https://github.com/niofis/light/blob/master/nim/src/lightpkg/pmap.nim), which uses spawn and divides the work into as many blocks as there are available processors.
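For context, here is a minimal sketch of the spawn-based idea (illustrative only, not the actual pmap code; assumes compilation with --threads:on):


import threadpool  # compile with: nim c --threads:on sketch.nim

proc work(x: int): int = x * x   # stand-in for the per-element function

proc parMap(data: seq[int]): seq[int] =
  var flows = newSeq[FlowVar[int]](data.len)
  for i, x in data:
    flows[i] = spawn work(x)     # the threadpool schedules the tasks
  result = newSeq[int](data.len)
  for i in 0 ..< data.len:
    result[i] = ^flows[i]        # ^ blocks until the value is ready

echo parMap(@[1, 2, 3, 4])       # @[1, 4, 9, 16]


The real pmap additionally splits the input into one block per available processor, which avoids spawning a task per element.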


Help with parallelizing a loop

2017-11-02 Thread jackmott
I would like some guidance on how best to parallelize this loop. I've played around with the parallel: macro and with spawn, but haven't gotten a working solution yet. It isn't entirely clear what the best approach should be:


for i, face in faces:  # face is a path to an image file
  var width, height, channels: int
  # loads and decodes jpgs/pngs from disk, returning a seq[byte]
  let data = stbi.load(face, width, height, channels, stbi.Default)
  if data != nil and data.len != 0:
    let target = (GL_TEXTURE_CUBE_MAP_POSITIVE_X.int + i).TexImageTarget
    # sets up the opengl texture
    glTexImage2D(target.GLenum,
                 level.GLint,
                 internalFormat.GLint,
                 width.GLsizei,
                 height.GLsizei,
                 0,
                 format.GLenum,
                 pixelType.GLenum,
                 data[0].unsafeAddr)  # error on this line when using parallel:
  else:
    echo "Failure to Load Cubemap Image"


When I try to just use the parallel: macro I get: `Error: cannot prove: 0 <= len(data) + -1 (bounds check)`

I also experimented with spawn but wasn't clear on how to use that well. 


Re: How do you keep your motivation on your open-source projects

2017-11-02 Thread Krux02
Techniques for dealing with lack of motivation, malaise, depression


Re: nim-ffmpeg wrapper: how to continue?

2017-11-02 Thread cybertreiber
Hi mratsim, The libraries provide good medium-level access. AFAIK they don't expose motion vectors though. So I'd either have to extend and depend on them, or go bare metal. I opted for the second choice (also to learn Nim) and am replicating this as a starter: [https://github.com/vadimkantorov/mpegflow](https://github.com/vadimkantorov/mpegflow)

Do you think wrapping ffms2 will be less effort than ffmpeg directly?

I'm working on realtime object detection professionally. Motion vectors from 
(hardware) encoders allow us to better extract objects. FWIW, calling ffmpeg as 
a subprocess is very practical for almost all other manipulation/streaming 
tasks we need. Personally, I think video processing with Arraymancer makes a 
great showcase.
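As an aside, the subprocess route is only a few lines from Nim with osproc (the file names and filter below are made up; this assumes ffmpeg is on the PATH):


import osproc

# rescale a clip by shelling out to ffmpeg; the flags are standard ffmpeg usage
let code = execCmd("ffmpeg -y -i input.mp4 -vf scale=640:-1 -c:v libx264 output.mp4")
if code != 0:
  echo "ffmpeg exited with code ", code


execCmd inherits stdout/stderr, so ffmpeg's progress output shows up as usual; use execProcess instead if you need to capture it.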


Re: nim-ffmpeg wrapper: how to continue?

2017-11-02 Thread mratsim
What will you use it for?

If it's to manipulate video data (preprocess, trim, filter it with convolution, denoising, etc.), it might be much easier to use [ffms2](https://github.com/FFMS/ffms2).

Or maybe provide bindings to Avisynth (Windows-only) or [VapourSynth](https://github.com/vapoursynth/vapoursynth) (a Python lib with a C++ backend).

If it's to play the video, I can't help you, but there are plenty of opensource 
video players.

Note: I plan "some time"™ to add video load/save to Arraymancer to easily use, filter, and reencode videos in deep learning pipelines (for face/object detection in videos, for example), so I would be willing to contribute to/kickstart FFMPEG/FFMS <-> Nim bindings.


Re: nim-ffmpeg wrapper: how to continue?

2017-11-02 Thread cybertreiber
It's perfectly fine to collate all definitions into a single type block. I already handle includes programmatically. What would be an idiomatic way to post-process .nim files? My naive approach would be to parse the AST, reorder it, and dump it. However, I'm not sure whether Nim can parse a file that's invalid to begin with.
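One possible route is a sketch against the Nim compiler's own modules; this API is internal and unstable, so treat the calls below as assumptions to verify against your compiler version:


# requires the compiler sources, e.g. via `nimble install compiler`
import compiler / [ast, idents, options, parser, renderer]

let config = newConfigRef()
let cache = newIdentCache()
# parse a source string into an AST; the parser recovers from some errors,
# but a file that is invalid to begin with may not round-trip cleanly
var tree = parseString("type\n  Foo = object\n", cache, config)
# tree.sons can be inspected and reordered here before dumping:
echo renderTree(tree)


Since the reordering happens on the AST, the intermediate state never has to be valid Nim on disk.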

Thanks for outlining the best practices Stefan. I think the gobject approach is 
also very neat.


Re: nim-ffmpeg wrapper: how to continue?

2017-11-02 Thread Stefan_Salewski
Yes, the reordering can be a lot of manual work.

For the gintro module, I do it automatically now, but it takes a few minutes to 
finish -- and it is based on gobject-introspection.

The ugly way is to put all types into a single type section at the front of the module. Many modules from the early days, like the gtk2 ones, do that. But of course that is mud mixing; the resulting module is very unfriendly to a human reader. Someone once recommended shipping one mud-mixed module for the compiler and one clean one for humans. You may do that.

Or you may ask Araq for the name of the pragma which allows forward declarations. I tested it one year ago and it was indeed working fine, but it was not really advertised or recommended. Or you may wait: future compiler versions may support forward declarations out of the box.


nim-ffmpeg wrapper: how to continue?

2017-11-02 Thread cybertreiber
One of our use cases requires interfacing with ffmpeg. I used cpp to circumvent many of the c2nim limitations (e.g. macros within structs). However, I can't come to grips with type definitions that aren't declared top to bottom.

Specifically, I had to reorder many types in the avformat.nim file.

  * What would be the best way to sort c2nim's type definitions automatically (on a limited time budget)?
  * What would it entail to upgrade the c2nim parser?
  * Or am I missing some obvious options? AFAIK, Nim doesn't support forward declarations, for good reasons.



The diff between the automatically generated .nim files (preprocessed .h files --> c2nim) and the manually reordered .nim files is on GitHub:
[https://github.com/ahirner/nim-ffmpeg/compare/auto_only?expand=1#diff-7dc1342505fab6283c80ade1f70d8c78](https://github.com/ahirner/nim-ffmpeg/compare/auto_only?expand=1#diff-7dc1342505fab6283c80ade1f70d8c78)

An early version seems to work, though. I just started using Nim and love it!


Re: Send data structures between threads?

2017-11-02 Thread peheje
Thanks mratsim!