Thanks Randy. They normally would be. However, these are short, highly repetitive tasks (think several times a day) that could easily be done procedurally in Nuke by just throwing in a new “header”, “body” and “tail”. Not for broadcast, just annoyingly simple little web vids.
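A minimal sketch of what swapping in a new header/body/tail could look like; all paths and frame counts here are hypothetical placeholders, and the `nuke` calls are only valid inside a Nuke session (the offset bookkeeping is plain Python):

```python
# Sketch of the "header/body/tail" idea: work out where each section lands
# in the appended edit, then build Read -> AppendClip -> Write in Nuke.
# Section paths and frame counts below are made up for illustration.

def layout_sections(sections):
    """sections: list of (path, frame_count) pairs.
    Returns (path, first_frame) pairs with cumulative offsets,
    matching how AppendClip places the clips back to back."""
    placed, offset = [], 0
    for path, count in sections:
        placed.append((path, offset))
        offset += count
    return placed

def build_edit(sections, out_path):
    """Build the node graph inside a Nuke session."""
    import nuke  # only importable inside Nuke
    reads = [nuke.nodes.Read(file=path, last=count)
             for path, count in sections]
    append = nuke.nodes.AppendClip(inputs=reads)
    nuke.nodes.Write(inputs=[append], file=out_path)
```

The pure `layout_sections` step is also what the audio side needs: the same first-frame offsets tell you where each clip's .wav should start.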
I originally was going to automate them in Nuke Studio, but NS still just creates another Nuke script and calls FFmpeg (I believe) for the wave generation. The downside is that, unlike Nuke, it seems to need an interactive session as the master (the farm just points to the correct slaves), and NS doesn’t handle variables as well, so there is a bit of manual relinking for each new version.

In this particular case it just seemed more flexible to create a single “video widget” script in which Nuke could procedurally create the new edit (including controlling in and out point adjustments if needed) and which could simply be run automatically once the assets are ready from the artists. I’d just need to execute the FFmpeg audio operations as a callback.

It’s really just an experiment in automation at this point, not a serious workflow approach, as it would obviously have huge drawbacks for all but the most specific of tasks. If you’ve got any more thoughts I’d love to hear them.

Michael

> On May 25, 2016, at 12:59 PM, Randy Little <randyslit...@gmail.com> wrote:
>
> But why? Why aren't the shots just going back into an edit that already has
> the sound? Why does the sound need to be in the MOV or whatever wrapper?
>
> Randy S. Little
> http://www.rslittle.com/
> http://www.imdb.com/name/nm2325729/
>
> On Wed, May 25, 2016 at 8:37 AM, Michael Hodges <mhod...@morganfalls.com> wrote:
>
> I’m creating a series of automation tasks using Nuke to append a series of
> image sequence clips (along with some predetermined image manipulations)
> whose sources may change depending on the artist's environment.
>
> These would automatically be written out on the farm and processed for review.
>
> The only issue is that some of the clips may have corresponding .wav audio
> which can’t be written out within Nuke for the final render.
> My solution is to offset any pre-existing audio .wavs (located in the Read
> paths with the same base name as the image sequences) by the corresponding
> Append lastFrame count and then run an FFmpeg render-time callback to merge
> those wavs (based on the offsets and descriptions generated by the script)
> into a new file that is read by the farm.
>
> Before I start from scratch I thought I’d see if there were any existing
> FFmpeg nodes/gizmos/code out there that assist with FFmpeg manipulation or
> writes within Nuke. I didn’t see anything on Nukepedia but I figured it
> would be good to check here first.
>
> _______________________________________________
> Nuke-users mailing list
> Nuke-users@support.thefoundry.co.uk, http://forums.thefoundry.co.uk/
> http://support.thefoundry.co.uk/cgi-bin/mailman/listinfo/nuke-users
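For what it's worth, the offset-and-merge step described in the quoted message could be sketched roughly like this. The `clips` structure, file names, and the adelay/amix filter choices are my assumptions, not anything stated in the thread; adelay takes per-channel delays in milliseconds (stereo assumed here), and the command builder is plain Python so FFmpeg is only invoked when the callback actually runs:

```python
# Rough sketch of the merge: delay each clip's .wav to its first frame in
# the appended edit, then mix everything into one track with FFmpeg.
import subprocess

def build_merge_cmd(clips, fps, out_wav):
    """clips: list of (wav_path, first_frame) pairs; returns an ffmpeg argv."""
    cmd, filters = ["ffmpeg", "-y"], []
    for i, (wav, first_frame) in enumerate(clips):
        cmd += ["-i", wav]
        delay_ms = int(round(first_frame * 1000.0 / fps))
        if delay_ms:
            # one delay per channel, separated by '|' (stereo assumed)
            filters.append("[%d]adelay=%d|%d[a%d]" % (i, delay_ms, delay_ms, i))
        else:
            filters.append("[%d]anull[a%d]" % (i, i))  # no offset needed
    labels = "".join("[a%d]" % i for i in range(len(clips)))
    filters.append("%samix=inputs=%d" % (labels, len(clips)))
    cmd += ["-filter_complex", ";".join(filters), out_wav]
    return cmd

def merge_audio():
    # Hypothetical data; a real script would collect these while appending.
    clips = [("shot_010.wav", 0), ("shot_020.wav", 120)]
    subprocess.check_call(build_merge_cmd(clips, 24, "review_audio.wav"))
```

Inside Nuke this could then be hooked up with `nuke.addAfterRender(merge_audio)` so the merge fires once the Write finishes. One caveat: amix rescales input levels by default, so a volume pass on the result may be needed.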