Sorry for being the one breaking the 256 MB barrier :/ At our studio we're transitioning to this workflow because it makes compositors' lives easier. The 1 GB fix you gave me helped a lot, but if others are working the same way, option 1 (increasing the default) might make sense. I like options 2 and 4 as well. Option 4 would require more of IC to be wrapped in Python, I guess?
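For anyone else hitting the same wall, the "fix" is basically just bumping the shared cache's memory budget at startup, roughly like this (give or take the exact create() signature in your OIIO version):

    #include <OpenImageIO/imagecache.h>
    using namespace OIIO;

    int main()
    {
        // Fetch the app-wide shared cache and raise its budget from the
        // 256 MB default to 1 GB.  (Newer OIIO releases return a
        // shared_ptr from ImageCache::create() instead of a raw pointer.)
        ImageCache* ic = ImageCache::create(true /*shared*/);
        ic->attribute("max_memory_MB", 1024.0f);
        // ... load and process images as usual ...
    }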
On Fri, Nov 12, 2021 at 9:27 PM Larry Gritz <[email protected]> wrote:

> I don't think that's off-track at all, thanks so much for the comments.
>
> I don't think we need a specialized class. There is already an ImageBuf
> constructor that takes an optional pointer to an ImageCache. The
> interpretation had always been to allow you to specify a particular
> ImageCache as backing for that IB, rather than using the default global
> cache.
>
> But an alternate interpretation we might prefer in the future is that an
> IB is backed by IC *only* when you use the constructor that tells it about
> an IC. Let's call this option (4). This would make most uses of IB faster,
> though apps that need the underlying IC memory management (because they are
> using enormous images, or many many IBs) would need to proactively request
> it on a per-IB basis. That sounds very reasonable to me.
>
> This can still be combined with any of 1-3; it's somewhat of an orthogonal
> choice. For example, even when requesting an IB with IC backing, maybe you
> want a threshold where it still uses local memory instead of the IC if the
> total consumption of local-mem IBs is relatively small.
>
> Contracting -- maybe? I think that a mode where it can grow if it detects
> bad thrashing is a lot easier to implement than knowing when it's safe to
> re-contract. I thought of it as a one-way ratchet, but maybe not?
>
> I do like your idea of "IBs are dumb (local mem) unless you tell it an IC
> to use." That makes sense to me, and I think it will improve performance
> across the board for the majority of applications where you aren't dealing
> with enough image data to need the scalability of IC backing.
>
>
> On Nov 12, 2021, at 12:07 PM, Nathan Rusch <[email protected]> wrote:
>
> Interesting questions. I'll add some opinions to the mix.
>
> 1. It's 2021, computer memories are bigger and so are our images. Should I
> just raise the default cache size to 1GB? More?
>
> I think raising the default to 1 or 2 GB is totally reasonable.
>
> 2. Should ImageBuf internally track the total amount of local memory held
> by all ImageBufs, and until the total reaches a threshold (of maybe a few
> GB), IBs should hold their memory locally and only have any of them fall
> back to ImageCache backing once the total is above the threshold? That
> would probably make them generally a bit faster than they are now, until
> you have a bunch of them and the cache starts to kick in.
>
> To be honest, I've always found the default cache-backed ImageBuf behavior
> a little odd from an API standpoint. If I were coming at the API with zero
> prior knowledge, I think I would intuitively expect some slightly different
> organization and usage patterns for the types at play:
>
> - ImageBuf would be a "lowest common denominator" type of class.
>   Instantiating it directly would use local pixel buffer storage by default.
> - ImageCache would provide a method for creating cache-backed ImageBufs.
>   Either that, or ImageBuf would include a constructor overload or other
>   static factory function that allowed an ImageCache to be passed as the
>   backing store.
> - These patterns might make more sense if they instead involved an
>   interface class (e.g. something like an abstract CacheInterface, with an
>   implementation for ImageCache), but the end result would be about the same.
> - Either way, I think this would make it easy for an application to
>   manage multiple cache "pools" on its own terms.
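For concreteness, the explicit-cache constructor Larry mentions above already lets you do roughly this today; the private cache setup and the file names are just made up for illustration:

    #include <OpenImageIO/imagebuf.h>
    #include <OpenImageIO/imagecache.h>
    using namespace OIIO;

    int main()
    {
        // A private cache for one "pool" of buffers, separate from the
        // app-wide shared cache (hypothetical 2 GB sizing).
        ImageCache* private_ic = ImageCache::create(false /*not shared*/);
        private_ic->attribute("max_memory_MB", 2048.0f);

        // Existing constructor: this ImageBuf is backed by that specific cache.
        ImageBuf cached_img("bigplate.exr", 0 /*subimage*/, 0 /*miplevel*/,
                            private_ic);

        // Today this one quietly falls back to the shared cache; under
        // option (4) it would simply hold its pixels in local memory.
        ImageBuf local_img("smallplate.exr");

        // (Tear down private_ic with ImageCache::destroy() once the
        // buffers that use it are gone.)
    }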
>
> I know that's getting a bit off track from your question, but I generally
> like the idea of ImageBuf being relatively "dumb" out of the box, with any
> "smart" behavior implemented on a subclass of some kind (e.g.
> CachedImageBuf), or via another API layer.
>
> 3. Allow the cache to be self-adjusting if it sees thrashing behavior.
>
> I agree with Phil on this: I wouldn't want this behavior unless it was
> opt-in, since it could easily wreak havoc on a 3D render or other heavy
> TextureSystem client.
>
> If implemented, I also agree that it should hinge on more than just an
> enabled/disabled state. The simple parameterization you mentioned (target
> size + hard cap) sounds like a reasonable baseline, although my next
> question would be whether the cache would "contract" at some point as
> pressure backs off to try and maintain the target size.
>
> -Nathan
>
> --
> Larry Gritz
> [email protected]

--
-Daniel
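P.S. For what it's worth, here is roughly how I picture the "target size + hard cap" one-way ratchet from (3). This is purely a sketch of the policy under discussion; none of these names exist in OIIO:

    #include <algorithm>

    // Hypothetical budget parameters for a self-adjusting cache.
    struct CacheBudget {
        float target_mb   = 1024.0f;   // preferred steady-state size
        float hard_cap_mb = 8192.0f;   // never grow past this
        float current_mb  = 1024.0f;
    };

    // One-way ratchet: grow when thrashing is detected, never contract.
    // "redundant_read_ratio" stands in for whatever thrashing metric the
    // cache already tracks (e.g. tiles re-read after eviction).
    inline void maybe_grow(CacheBudget& b, float redundant_read_ratio)
    {
        const float thrash_threshold = 0.25f;   // arbitrary for the sketch
        if (redundant_read_ratio > thrash_threshold)
            b.current_mb = std::min(b.current_mb * 2.0f, b.hard_cap_mb);
        // (Whether it should ever contract back toward target_mb as
        // pressure backs off is the open question Nathan raises above.)
    }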
_______________________________________________
Oiio-dev mailing list
[email protected]
http://lists.openimageio.org/listinfo.cgi/oiio-dev-openimageio.org
