Hi Larry,

Thank you, that is a wealth of information that I didn't quite know how to
discover previously.  I will try all of your suggestions when I can and see
which one works best. The Python approach was fine when I tried it; it was
fairly slow, but that's acceptable since we aren't currently processing
massive images very often, although that can't be ruled out in the future.
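For what it's worth, the chunked approach I tried looks roughly like the
sketch below. It's only an outline, assuming OIIO's Python bindings:
`row_batches` and `scanline_to_tiled` are names I made up for this note, and
the exact `read_scanlines` / `write_tiles` argument order varies between OIIO
releases, so check against the bindings you actually have.

```python
def row_batches(ybegin, yend, chunk):
    """Split the half-open scanline range [ybegin, yend) into
    chunks of at most `chunk` rows (the last one may be short)."""
    for y0 in range(ybegin, yend, chunk):
        yield y0, min(y0 + chunk, yend)

def scanline_to_tiled(src, dst, tile=64):
    """Copy a scanline image to a tiled one, holding only one
    tile-row's worth of pixels in memory at a time."""
    # Imported here so row_batches above is usable without OIIO installed.
    import OpenImageIO as oiio

    inp = oiio.ImageInput.open(src)
    spec = oiio.ImageSpec(inp.spec())   # copy the spec, then make it tiled
    spec.tile_width = tile
    spec.tile_height = tile
    out = oiio.ImageOutput.create(dst)
    out.open(dst, spec)
    for y0, y1 in row_batches(spec.y, spec.y + spec.height, tile):
        # Read a batch of scanlines, then emit it as one row of tiles.
        data = inp.read_scanlines(0, 0, y0, y1, 0, 0, spec.nchannels)
        out.write_tiles(spec.x, spec.x + spec.width, y0, y1, 0, 1, data)
    out.close()
    inp.close()
```

The peak footprint is then roughly width x tile x nchannels x bytes-per-sample,
independent of image height.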

Our target renderer is Houdini's Mantra, which needs RAT format to render
efficiently.  The current workflow is raw instrument data -> GeoTIFF ->
color-transformed TIFF via OIIO -> RAT via Houdini's tools.  Even if OIIO
handles these files smoothly, I think we've run into bugs in Houdini's image
CLI tools, which may not have been designed for files this big; I'm filing
bugs about that as I get the chance.  In any case I'd like to get an OIIO RAT
plugin implemented at AMNH, where I am currently (I have no doubt other
facilities have done so already).

Cheers,
Jon

On Sun, Jul 7, 2019 at 12:50 PM Larry Gritz <[email protected]> wrote:

> Hi, I'm so sorry that your original message fell through the cracks.
>
> By default, oiiotool converts all images it reads to 'float' data type for
> its in-memory representation. That's because it assumes that the usual case
> is to do a bunch of math operations, which really want to be float anyway
> (both because it's faster and because it preserves full precision through
> intermediate calculations). It remembers which format the file was originally,
> though, so when it writes output to a file, you end up with what you
> started with, unless you use -d to override.
>
> This means that if your image is uint8 in the file, it has a 4x in-memory
> expansion, and if it's half or uint16, it's a 2x expansion. Generally,
> that's not a big deal, because most people want the math to be fast and
> mostly have images that can easily fit in memory. But in your case, that
> may push you to unacceptable memory sizes for really huge images. And it's
> even worse, because for really large images, oiiotool reads everything
> through the ImageCache, so there may be a second copy (or partial copy)
> still lurking in the cache.
>
> There is an oiiotool option to turn this off: --native. Also, `-i` takes a
> `now=1` modifier to force it to bypass the cache. For example,
>
>     oiiotool -native -i:now=1 scanline.tif -tile 64 64 -o tiled.tif
>
> This should have much lower memory consumption if the original scanline
> file is 8 or 16 bits per channel, because it will bypass the cache and keep
> the data in its original type.
>
> You noticed that iconvert does better, because it always keeps the data in
> native format; there is no image cache and no expansion to float (because
> all iconvert can do is read and write, doing format and data conversions,
> but no "image math", so there is no reason to want things in float).
>
> So either oiiotool --native or iconvert raises your memory ceiling
> (possibly by up to 4x, plus whatever the cache would have held), but there
> is still a limit, because both oiiotool and iconvert (and for that matter
> maketx) are mostly oriented toward reading the full input image in.
>
> If you are doing a straight file format conversion or scanline-to-tile
> conversion ("-i infile -tile 64 64 -o outfile"), then you can also try
> using the cache with auto-tiling, which breaks even a scanline file into
> "tiles" so the cache can be used efficiently (the cache doesn't help if it
> has to treat the entire input image as one cache tile).
>
>     oiiotool -cachesize 4096 -autotile 1024 -native infile -o outfile
>
> I think that should be able to handle any image file size, because the -o
> will draw from the cache a bit at a time.  (It will be slower than not
> using those options.)
>
> OK, but that only works well if you don't have any "intermediate
> calculations", for example
>
>     oiiotool ... infile -colorconvert sRGB ACES -o outfile
>
> That's because the --colorconvert (or any other math op, or the process of
> building the MIPmap if you want the output both tiled and multi-resolution)
> will be the step that pulls everything from the cache and makes an
> intermediate copy. Oh well.
>
> So if you are running into that problem and the cache doesn't help, and
> --native isn't enough (because the image is too big, or because it's float
> to start with), then you have two choices:
>
> 1. Use C++ or Python (as you've discovered) to directly read and write
> bundles of scanlines or tiles (doing any processing you need on the partial
> reads before you write them out). You don't need to worry about "if the
> format supports that" -- in fact, any format that doesn't support multiple
> scanline or tile reads and writes will automatically emulate it with a
> succession of single-scanline or single-tile reads or writes.
>
> 2. Occasionally people do ask for a "low memory" mode for maketx in
> particular, for enormous textures. It's not conceptually hard. You could do
> this and submit a PR! (I would give you some help.) Or, I suppose, wait for
> me to have the free time. But I'm juggling a lot of projects right now.
>
> Does this help?
>
> *To summarize:*
>
> * First, try oiiotool with --native and -i:now=1. If that's good enough,
> you're done.
>
> * Next, try oiiotool with --native and --autotile (and NOT -i:now=1).
> Maybe that does the trick (though slowly?).
>
> * If you are just doing a straight file format conversion or
> scanline-to-tile conversion, try a simple Python script that reads
> scanlines in batches of the tile height and outputs one "tile row" at a
> time.
>
> * If that's still not enough and you really need a full maketx or oiiotool
> functionality, but in a low-memory out-of-core mode, that is a bigger
> operation and possibly a significant redesign.
>
> -- lg
>
>
> On Jul 6, 2019, at 6:56 PM, jon parker <[email protected]> wrote:
>
> Since writing this question, I've noticed two things:
>
> - iconvert uses less memory than oiiotool, perhaps?
> - One can work around oiiotool using Python bindings and read / write
> bundles of scanlines, which works for formats that support it.
>
>
> On Wed, Nov 28, 2018 at 4:14 PM jon parker <[email protected]> wrote:
>
>> Hi OpenImageIO developers,
>>
>> Apologies if this is a repeat; I couldn't search the archives for some
>> reason (search seemed broken on my workstation). This is a feature request
>> posted here because, IIRC, GitHub issues are for problems with code rather
>> than feature requests.
>>
>> It looks to me like oiiotool first loads an entire image into memory when
>> processing, even when that could be avoided.  Is there a compile flag to
>> make it more conservative?
>>
>> The problem I currently have is processing extremely large surface maps
>> of Earth and other bodies, which I'd like to convert into a tiled format
>> to load as textures.  Some of these images, however, are more than 100K
>> pixels wide, and most computers don't have enough memory to load them
>> entirely.
>>
>> I can get halfway there with GDAL, but I cannot create a .tx or convert
>> color spaces without OIIO.  I'm aware of workarounds such as pre-slicing,
>> but I'd like to avoid that.
>>
>> Cheers,
>> Jon
>>
> _______________________________________________
> Oiio-dev mailing list
> [email protected]
> http://lists.openimageio.org/listinfo.cgi/oiio-dev-openimageio.org
>
>
> --
> Larry Gritz
> [email protected]
>