On Apr 28, 2015, at 6:44 AM, Richard Shaw <[email protected]> wrote:

> The bulk of the very large images were from a DSLR and I'm guessing the jpeg 
> quality was set to 100 so I did something like:
> 
> find ./ -size +6M -exec oiiotool {} --quality 80 -o {} \;



That's a really great solution! Much simpler and more elegant than anything I
was thinking of. I think because I mostly deal with lossless image formats, I
was thinking of a different set of strategies entirely.

By dialing down the quality, you are getting rid of fine detail (and a lot of
the compressed size was spent representing that pointless detail). This trick
works well for JPEG, or for any format whose lossy compression level can be
dialed. Note that it only reduces the on-disk compressed file size; it doesn't
reduce the in-memory size of the image.
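
(For anyone scripting this outside of oiiotool: the same re-save-at-lower-quality
trick is a one-liner in, e.g., Pillow. The filenames here are just placeholders.)

    # Re-encode at quality 80; the detail the encoder discards at this
    # setting is exactly what was inflating the file size.
    from PIL import Image

    Image.open("big_photo.jpg").save("big_photo_q80.jpg", quality=80)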

Just for the sake of discussion, here's an idea I was thinking about last night:
downsize 2x, then upsize 2x again (with good filters in both directions), and
compare the result to the original. If the differences are below some threshold,
keep the lower-resolution version.
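
Something like this sketch, using Pillow and NumPy rather than OIIO (the
threshold and filenames are made-up placeholders to tune):

    # Downsize 2x, upsize back to the original resolution with a good
    # filter (Lanczos) in both directions, and measure the RMS difference.
    # If it's below the threshold, the half-res version is good enough to keep.
    import numpy as np
    from PIL import Image

    THRESHOLD = 2.0  # hypothetical: RMS error in 8-bit pixel units

    orig = Image.open("photo.jpg").convert("RGB")   # hypothetical input
    w, h = orig.size
    small = orig.resize((w // 2, h // 2), Image.LANCZOS)
    restored = small.resize((w, h), Image.LANCZOS)

    a = np.asarray(orig, dtype=np.float64)
    b = np.asarray(restored, dtype=np.float64)
    rms = np.sqrt(np.mean((a - b) ** 2))
    if rms < THRESHOLD:
        small.save("photo_half.jpg", quality=90)    # keep the lower-res version
    print(f"round-trip RMS = {rms:.2f}")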

The full generalization of this strategy would be to generate a Laplacian
pyramid -- a multiresolution representation like a MIP-map, except that instead
of each level holding a full copy of the image at that resolution, it holds only
the differences needed to reconstruct the R resolution from the R/2 resolution
level. So you construct the pyramid, cull the high-resolution levels whose RMS
energy is below some threshold, and reconstruct the medium-resolution image to
save. I'm sure that at some point somebody has written a thesis describing how
to do this so that you only discard high-res levels that don't contribute much
perceptually to the final image.
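
Here's roughly what that could look like, sketched with OpenCV's pyrDown/pyrUp
(Gaussian-filtered halving/doubling). The RMS threshold is a made-up
placeholder, and a perceptual metric would be smarter than plain RMS:

    import cv2
    import numpy as np

    def build_laplacian_pyramid(img, levels=4):
        """Each level holds only the detail needed to go from R/2 back to R."""
        pyramid = []
        current = img.astype(np.float32)
        for _ in range(levels):
            h, w = current.shape[:2]
            down = cv2.pyrDown(current)
            up = cv2.pyrUp(down, dstsize=(w, h))
            pyramid.append(current - up)     # detail (Laplacian) level
            current = down
        pyramid.append(current)              # coarsest base level
        return pyramid

    def cull_and_reconstruct(pyramid, rms_threshold=1.0):
        # Walk from fine to coarse, dropping levels with negligible energy.
        kept = list(pyramid)
        while len(kept) > 1:
            rms = np.sqrt(np.mean(kept[0] ** 2))
            if rms >= rms_threshold:
                break
            kept.pop(0)                      # this level contributes little
        # Reconstruct from the base up through the surviving detail levels.
        img = kept[-1]
        for detail in reversed(kept[:-1]):
            h, w = detail.shape[:2]
            img = cv2.pyrUp(img, dstsize=(w, h)) + detail
        return img  # at the resolution of the finest surviving level

    img = cv2.imread("photo.jpg")            # hypothetical input path
    recon = cull_and_reconstruct(build_laplacian_pyramid(img))
    cv2.imwrite("photo_reduced.jpg", np.clip(recon, 0, 255).astype(np.uint8))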

--
Larry Gritz
[email protected]


