Note that zlib uses the same compression as ZIP (DEFLATE, not LZW), so results should be similar. Plucker could _possibly_ get better compression by using a different algorithm (bzip2, 7z, etc.), but it would take longer to create documents and probably longer to display them as well.
Using selective compression would make more sense, actually. I'm not clear on exactly how this would be implemented, but doing a per-record comparison of the compressed vs. uncompressed record, and storing only the smaller of the two, would decrease the final .pdb size quite a bit for large files or those with lots of images. Compressing JPEG files will, in some cases, result in _larger_ records than the original data, since the data is already compressed. Scale that out for a document like the Java 1.4.2 API docs (a 25M Plucker document) with several hundred images, and you can see where the savings add up.
It would also slow down content creation quite a bit to do this comparison at creation time, but that's mostly negligible when you consider the savings in final distributed file size.
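The per-record comparison described above could be sketched roughly as follows. This is a minimal illustration using Python's zlib module, not code from the actual Plucker parser; the helper name and the compressed/raw flag are assumptions for the example.

```python
import zlib

def best_record(data: bytes):
    """Hypothetical helper: compress a record with zlib and return
    whichever representation is smaller, plus a flag indicating
    whether the stored bytes are compressed."""
    compressed = zlib.compress(data, 9)  # maximum compression level
    if len(compressed) < len(data):
        return compressed, True   # compression won; store compressed
    return data, False            # compression lost (e.g. JPEG data); store raw
```

Text-heavy records would come back compressed, while already-compressed image data (which tends to grow under zlib) would be stored as-is, at the cost of one extra compression pass per record at creation time.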
David A. Desrosiers [EMAIL PROTECTED] http://gnu-designs.com

