Interesting questions. I'll add some opinions to the mix.

1. It's 2021, computer memories are bigger and so are our images. Should I just raise the default cache size to 1GB? More?
I think raising the default to 1 or 2 GB is totally reasonable.
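
For what it's worth, until/unless the default changes, applications can already opt into a bigger budget themselves. A minimal sketch, assuming the existing "max_memory_MB" attribute and the 2.x-era raw-pointer create()/destroy() calls:

    #include <OpenImageIO/imagecache.h>
    using namespace OIIO;

    int main()
    {
        // Grab the (shared) cache and raise its budget explicitly,
        // independent of whatever the compiled-in default is.
        ImageCache* ic = ImageCache::create();
        ic->attribute("max_memory_MB", 2048.0f);   // e.g. a 2 GB budget
        // ... use the cache directly or via cache-backed ImageBufs ...
        ImageCache::destroy(ic);                   // release our reference
        return 0;
    }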

2. Should ImageBuf internally track the total amount of local memory held by all ImageBufs, and until the total reaches a threshold (of maybe a few GB), IBs should hold their memory locally and only fall back to ImageCache backing once the total is above the threshold? That would probably make them a bit faster than they are now in the typical case, until you have a bunch of them and the cache starts to kick in.
To be honest, I've always found the default cache-backed ImageBuf behavior a little odd from an API standpoint. If I were coming at the API with zero prior knowledge, I think I would intuitively expect some slightly different organization and usage patterns for the types at play:
- ImageBuf would be a "lowest common denominator" type of class. Instantiating it directly would use local pixel buffer storage by default.
- ImageCache would provide a method for creating cache-backed ImageBufs. Either that, or ImageBuf would include a constructor overload or other static factory function that allowed an ImageCache to be passed as the backing store.
    - These patterns might make more sense if they instead involved an interface class (e.g. something like an abstract CacheInterface, with an implementation for ImageCache), but the end result would be about the same.
    - Either way, I think this would make it easy for an application to manage multiple cache "pools" on its own terms.

I know that's getting a bit off track from your question, but I generally like the idea of ImageBuf being relatively "dumb" out of the box, with any "smart" behavior implemented on a subclass of some kind (e.g. CachedImageBuf), or via another API layer.
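
To make that concrete, here's a very rough sketch of the shape I have in mind. To be clear, none of these types or signatures exist in OIIO today; CacheInterface, MyImageBuf, and wrap() are purely illustrative names:

    #include <memory>
    #include <string>
    #include <vector>

    // Abstract backing store an image buffer could delegate reads to.
    class CacheInterface {
    public:
        virtual ~CacheInterface() = default;
        // Fill `pixels` with the image's data; return true on success.
        virtual bool read_pixels(const std::string& filename,
                                 std::vector<float>& pixels) = 0;
    };

    // "Dumb" buffer: owns local pixel storage by default.
    class MyImageBuf {
    public:
        explicit MyImageBuf(const std::string& filename) : m_name(filename) {}

        // Factory (or constructor overload): explicitly cache-backed.
        static MyImageBuf wrap(const std::string& filename,
                               std::shared_ptr<CacheInterface> cache) {
            MyImageBuf ib(filename);
            ib.m_cache = std::move(cache);
            return ib;
        }

        bool cache_backed() const { return m_cache != nullptr; }

    private:
        std::string m_name;
        std::vector<float> m_local_pixels;        // used when not cache-backed
        std::shared_ptr<CacheInterface> m_cache;  // used when cache-backed
    };

The main point is that the cache dependency becomes explicit and injectable, which is also what would make multiple cache "pools" straightforward for an application to manage.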

3. Allow the cache to be self-adjusting if it sees thrashing behavior.
I agree with Phil on this: I wouldn't want this behavior unless it were opt-in, since it could easily wreak havoc on a 3D render or other heavy TextureSystem client.

If implemented, I also agree that it should hinge on more than just an enabled/disabled state. The simple parameterization you mentioned (target size + hard cap) sounds like a reasonable baseline, although my next question would be whether the cache would "contract" back toward the target size as memory pressure eases off.
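
Purely for illustration, the opt-in parameterization might look something like this from the application side. Only "max_memory_MB" is a real attribute today; "autogrow" and "target_memory_MB" are hypothetical names for the knobs under discussion:

    #include <OpenImageIO/imagecache.h>
    using namespace OIIO;

    int main()
    {
        ImageCache* ic = ImageCache::create();
        ic->attribute("max_memory_MB", 8192.0f);     // existing attribute: absolute ceiling
        ic->attribute("autogrow", 1);                // hypothetical: opt in to self-adjustment
        ic->attribute("target_memory_MB", 2048.0f);  // hypothetical: preferred steady-state size
        ImageCache::destroy(ic);
        return 0;
    }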

-Nathan