> My suspicion is that this is related to a growing number of files held open
> by the underlying ImageCache, which will not be immediately closed or freed
> just because you clear() or even destroy the ImageBuf. Or possibly by the
> overhead of what happens when the maximum number of open files is reached?
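For anyone following along, here is a minimal sketch of what raising the cache's open-file limit could look like from Python. The `"max_open_files"` attribute name and `invalidate()` call follow the C++ ImageCache API; the exact Python construction call is an assumption and may differ between OIIO versions, so treat this as an illustration, not a verified recipe:

```python
# Hedged sketch: raising the ImageCache open-file limit via OIIO's Python
# bindings. "max_open_files" matches the C++ ImageCache attribute name; the
# Python constructor used here is an assumption and may vary by OIIO version.
try:
    import OpenImageIO as oiio

    cache = oiio.ImageCache()                 # assumed binding for the shared cache
    cache.attribute("max_open_files", 500)    # default is 100; try 500 per the thread
    # After modifying a file on disk, drop its cached state so stale
    # filehandles/metadata are not reused (path is a placeholder):
    # cache.invalidate("/path/to/changed_image.exr")
    HAVE_OIIO = True
except Exception:
    # The OpenImageIO Python module may be absent, or the binding may differ.
    HAVE_OIIO = False

print("OpenImageIO available:", HAVE_OIIO)
```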
I was coming to a similar conclusion when I was trying to modify files in my test bed, and the OS was complaining that I still had open filehandles.

> The default is 100, I'm curious what happens if you just raise that. On
> Linux/OSX it should be safe to go into thousands, but Windows usually has
> some lower limit, so let's try 500 just to see what happens to your timings.
> In particular, does the slowdown seem to come later, i.e. after more files
> have been touched?

I ran some more tests locally and, unfortunately, I didn't see any improvement using the cache, nor much improvement using the invalidate method you described. I've attached a script and some graphs if you want to take a look. The only real requirement is that you populate a local directory with 2500 copies of the same image (if you can't get the attachments through here, let me know and I'll email them directly).

The interesting bit is that I always get a linear-ish increase in time with the ImageBuf, but if I graph the time it takes to open an ImageInput and query the spec, it's pretty constant.

Curious to know if you have any other thoughts! Thanks again!

-J
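The measurement methodology above can be sketched with a stdlib-only harness (the workloads below are purely illustrative stand-ins, since the real script needs OIIO and 2500 test images): time each of N open-and-query operations individually, then compare the average cost of the last quarter of iterations against the first quarter. A per-call cost that grows with the iteration index, as reported for ImageBuf, shows up as a large ratio; a constant per-call cost, as reported for ImageInput + spec, stays near 1.

```python
# Sketch of the per-iteration timing methodology, with simulated workloads so
# it runs without OpenImageIO or a directory full of test images.
import time

def timed_run(op, n):
    """Return a list of per-iteration wall-clock times for op(i), i = 0..n-1."""
    times = []
    for i in range(n):
        t0 = time.perf_counter()
        op(i)
        times.append(time.perf_counter() - t0)
    return times

def growth_ratio(times):
    """Average time of the last quarter divided by that of the first quarter."""
    q = max(1, len(times) // 4)
    return (sum(times[-q:]) / q) / (sum(times[:q]) / q)

# Stand-in ops: one whose cost grows linearly with the iteration index
# (ImageBuf-like behavior from the thread) and one with constant cost
# (ImageInput-open-and-spec-like behavior).
linear_op = lambda i: sum(range(200 * (i + 1)))
constant_op = lambda i: sum(range(5000))

linear_ratio = growth_ratio(timed_run(linear_op, 200))
constant_ratio = growth_ratio(timed_run(constant_op, 200))
print(f"linear-ish workload ratio: {linear_ratio:.1f}")
print(f"constant workload ratio:   {constant_ratio:.1f}")
```

Graphed per-iteration, the first workload produces the linear-ish ramp described above, while the second stays flat.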
oiiotest_imgbuf_perf.py
_______________________________________________
Oiio-dev mailing list
[email protected]
http://lists.openimageio.org/listinfo.cgi/oiio-dev-openimageio.org
