Hi Larry,

Actually I have been using oiiotool a lot already. And yet, somehow, I
overlooked one of the first options: "-a Do operations on all
subimages/miplevels"...

I just tried it, and it didn't work for the first use case I wanted it for,
which was using --mosaic to stitch together tiled multi-layer renders.
(Perhaps I went through this the last time I tried and forgot; it was a few
months ago.) That operation doesn't seem to use the ImageRecRef trick you
describe...

https://github.com/OpenImageIO/oiio/blob/6fde4d0b7791f3de8e44ccec46ec3220ee2bf91d/src/oiiotool/oiiotool.cpp#L3243

The other similar use case would be --over-ing multilayer renders,
where you'd want it to use the alpha of the main rgba layer, which could be
ambiguous?

I just gave that a quick try (with the -a argument) and it seemed to ignore
all the other layers... is there a trick to it?

Those are the two main operations that have come up so far: mosaic and
over. The other we've discussed, but hasn't yet been necessary, would be
stereo-joining of separate multilayer images. This would involve renaming
the subimages to prefix them with the view names (i.e. "depth" might become
"left.depth").
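
That renaming step could be as simple as the following pure-Python sketch
(the view-prefix scheme is the one above; the function name is made up):

```python
def prefix_with_view(subimage_names, view):
    """Prefix each subimage name with a view name for stereo-joining,
    e.g. "depth" -> "left.depth"."""
    return ["%s.%s" % (view, name) for name in subimage_names]
```

So prefix_with_view(["rgba", "depth"], "left") would give
["left.rgba", "left.depth"].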

So having a way to do those things, using either oiiotool or the Python API
would help us use OIIO in a few more parts of the pipeline.
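
For reference, here's roughly how I'd imagine the per-subimage "over" loop
looking in Python, mirroring the ImageRec loop you describe. This is an
untested sketch: the function name and arguments are made up, it assumes
both files have matching subimages in the same order, and it punts on
writing the results back out as a single multi-subimage file (the import is
inside the function so the sketch parses without OIIO installed):

```python
def over_all_subimages(fg_path, bg_path):
    """Composite fg over bg, subimage by subimage; returns the list of
    resulting ImageBufs (writing them back to one file is left out)."""
    import OpenImageIO as oiio
    nsub = oiio.ImageBuf(fg_path).nsubimages
    results = []
    for s in range(nsub):
        fg = oiio.ImageBuf(fg_path, s, 0)  # subimage s, top MIP level
        bg = oiio.ImageBuf(bg_path, s, 0)
        comp = oiio.ImageBuf()
        oiio.ImageBufAlgo.over(comp, fg, bg)
        results.append(comp)
    return results
```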

The reason for switching to Python was in a different context where needing
to apply operations to all subimages wasn't so important. One motivation I
had for switching to the Python API (other than that I was doing things in
Python anyway so calling the API directly is in some ways neater than
calling a subprocess) was for dealing with video, as you alluded to. I can
stream raw image data from and to ffmpeg via stdin and stdout and stick
that directly into an ImageBuf. It saves having to write out temporary
frames to disk. I don't think oiiotool can read/write from stdin/out? If it
could I could probably just connect ffmpeg to oiiotool with pipes.
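
For concreteness, the streaming trick looks something like this (a
simplified stdlib-only sketch, assuming 8-bit rgb24 output and ffmpeg on
PATH; the actual ImageBuf step is only indicated by a comment):

```python
import subprocess

def frame_nbytes(width, height, nchannels=3, bytes_per_sample=1):
    """Byte size of one raw video frame."""
    return width * height * nchannels * bytes_per_sample

def raw_frames(path, width, height):
    """Yield raw rgb24 frames decoded by ffmpeg to its stdout."""
    cmd = ["ffmpeg", "-i", path, "-f", "rawvideo", "-pix_fmt", "rgb24", "-"]
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.DEVNULL)
    n = frame_nbytes(width, height)
    try:
        while True:
            data = proc.stdout.read(n)
            if len(data) < n:
                break
            # Each blob here becomes the pixel buffer of an ImageBuf.
            yield data
    finally:
        proc.stdout.close()
        proc.wait()
```

Writing is the same idea in reverse: feed raw pixels to an ffmpeg process
reading rawvideo from its stdin.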

I did try the ffmpeg OIIO plugin briefly, but it only supports input at the
moment, and isn't normally compiled into our local build. Since the
streaming trick was working, I stuck with that for now. One thing that
rubbed me the wrong way was how it treated frames as subimages... I feel
like frames in a video/sequence are an orthogonal concept to subimages,
which are usually stereo views or auxiliary layers. There are stereo videos
(I think encoded as two interleaved video streams?), in which case you
might want to read/write a particular subimage of a particular frame.
Perhaps a frame could be made a first-class concept in the API. But as you
say, it's another set of issues entirely, and adds another dimension of
complexity that might best live outside the scope of the API.

At any rate, the stuff I'm doing with video is working and sufficient for
now. It's with the multilayer sequences where we're currently using Nuke
instead, out of convenience.

Hope that helps explain where I'm at a little better.

Cheers

Steve
On Mar 16, 2016 2:56 PM, "Larry Gritz" <[email protected]> wrote:

> You are right, ImageBuf alone only handles a "single" image, like just one
> subimage or MIP level.
>
> This comes up with oiiotool, which does a lot of the things you mention.
> Internal to oiiotool, there is something called an ImageRec which
> encapsulates everything in an image file, which is conceptually a
> collection of subimages, and for each subimage either one image or all the
> MIP layers, each of which is an ImageBuf. So a lot of oiiotool ops are
> implemented rather like this:
>
>     ImageRecRef A = ot.curimg;
>     for (int s = 0; s < A->subimages(); ++s) {
>         for (int m = 0;  m < A->miplevels(s);  ++m) {
>             ImageSpec *spec = &(*A)(s,m).specmod();
>             // Do stuff with ImageBuf (*A)(s,m)...
>         }
>     }
>
> So one obvious question is, are you doing things the hard way in Python
> merely to reproduce what oiiotool can do for you succinctly already, and
> handling all the subimages?
>
> Of course, you could make such a loop in Python, as well, with some more
> housekeeping of the ImageBuf's. But if it's important that dealing with
> these multi-image files be done in Python or C++ and it's just too clunky,
> then perhaps we should consider taking ImageRec (or something like it,
> possibly with a better choice of name), decoupling it from oiiotool, and
> exposing it more widely. There's no real magic there, it's just a
> convenient wrapper for all the subimages (each an ImageBuf) that are in a
> file.
>
> I think that would pretty neatly solve the problems you have with layers,
> stereo, and so on. You mentioned in a thread on another mail list also
> wanting to work with video files. That may be another set of issues
> entirely.
>
> Maybe the best way to proceed is for you to give 3 or 4 concrete examples
> of things you want to do, and we can think about how it would most easily
> be done today with the various APIs we have, and that might lead naturally
> to discovering where the holes are that need to be addressed.
>
> -- lg
>
>
> On Mar 15, 2016, at 9:44 PM, Steve Agland <[email protected]> wrote:
>
> I've recently been looking into automating various simple image processing
> tasks using OpenImageIO via the Python bindings. E.g. format conversion,
> resizing, normalizing display/data windows, color space conversion, contact
> sheet generation, "over" comps, tile stitching...
>
> I've found the ImageBuf and ImageBufAlgo classes very convenient for
> expressing these operations clearly and concisely. But things got messier
> when I wanted to work with multi-layer images (to use Nuke terminology). In
> particular, these are multi-subimage files where all the images are the
> same (display) resolution, but may have different numbers of channels or
> bit depths. It's common to get these out of a renderer now: the subimages
> might be stereo views, or AOVs (depth channels, mattes etc.) or a
> combination of both.
>
> I found myself often wanting to treat these as a single conceptual image,
> similar to the way most Nuke nodes can be set to apply to all layers. But
> ImageBuf requires you to specify a subimage to read. So the code complexity
> increases as you need to cycle through each subimage, process it
> separately, keep track of their names, bundle them all back together, etc.
>
> As yet I haven't tried to implement anything along these lines for
> multi-layer images but we're considering doing it soon. I was wondering if
> there are any tricks to (optionally) hiding away the multi-subimage nature
> of images when working with ImageBufs and ImageBufAlgo operations, similar
> to the way many of these operations work independent of the
> number/name/type of channels. Or if not, do you think the API could be
> extended in an elegant way to allow that?
>
> This isn't so much a "can it be done?" question, as I'm sure the
> capability is there. It's more: can it be done succinctly? This could help
> us use the OIIO Python bindings to implement small automated image
> processing tasks in the pipeline in a low-overhead way.
>
> One small disclaimer, I'm working with a local branch of OIIO which is -
> for now - a year or so out of date, but I have been referring to the latest
> public documentation.
>
> Cheers
>
> Steve
> _______________________________________________
> Oiio-dev mailing list
> [email protected]
> http://lists.openimageio.org/listinfo.cgi/oiio-dev-openimageio.org
>
>
> --
> Larry Gritz
> [email protected]
>
>
>
