I have an application that uses zlib (version 1.1.4) to save all object data 
to a compressed stream.

This stream is stored by saving it as a file; after an automation event, 
Progress code reads this file and writes it to the database. (Transfer via 
file was much faster than direct transfer.)
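
For reference, a minimal sketch of what the file-saving step looks like, 
assuming the compressed stream is written with C stdio (the function and 
variable names are hypothetical). The point is that every call that can fail 
is checked, because an unchecked short fwrite() or a failed fclose() on a 
remote or quota-limited volume would leave exactly the kind of truncated 
file described below:

    /* Hypothetical sketch: write a compressed blob to a file, checking
     * every step. A short write or a flush/close error on a Citrix
     * profile disk would otherwise truncate the file silently. */
    #include <stdio.h>

    int save_blob(const char *path, const unsigned char *data, size_t len)
    {
        FILE *fp = fopen(path, "wb");
        if (fp == NULL)
            return -1;

        if (fwrite(data, 1, len, fp) != len) {  /* short write: disk full/quota? */
            fclose(fp);
            return -1;
        }
        if (fflush(fp) != 0) {                  /* write errors can surface here */
            fclose(fp);
            return -1;
        }
        return (fclose(fp) == 0) ? 0 : -1;      /* ...or even at close time */
    }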

All has worked well for more than two years now, except at a few companies 
that operate in Powerfuse and Citrix environments. Roughly once a month 
(about 1 in 1000 store cycles) the stored blob data cannot be decompressed, 
or the blob has been truncated at a random place (and then also cannot be 
decompressed). I have not been able to locate the cause of this problem.

My question: within Powerfuse and Citrix servers it is possible to limit the 
maximum amount of memory and resources per user. Zlib needs some memory to 
perform compression; could this be a problem?
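
If memory does run out, zlib itself should report it rather than corrupt the 
stream: allocation failures come back as Z_MEM_ERROR from deflateInit() and 
deflate(). So one thing worth checking is whether every return code is 
actually tested. A minimal sketch, assuming a single in-memory buffer is 
compressed in one shot (the buffer names are hypothetical; the zlib calls 
themselves are the standard API):

    /* Hypothetical sketch: one-shot compression that propagates zlib's
     * error codes instead of ignoring them. Under a per-user memory cap,
     * deflateInit()/deflate() can return Z_MEM_ERROR. */
    #include <string.h>
    #include "zlib.h"

    int compress_blob(const unsigned char *src, uLong src_len,
                      unsigned char *dst, uLong *dst_len)
    {
        z_stream strm;
        int rc;

        memset(&strm, 0, sizeof(strm));       /* Z_NULL: use default allocators */
        rc = deflateInit(&strm, Z_DEFAULT_COMPRESSION);
        if (rc != Z_OK)                       /* Z_MEM_ERROR if allocation failed */
            return rc;

        strm.next_in   = (Bytef *)src;
        strm.avail_in  = (uInt)src_len;
        strm.next_out  = dst;
        strm.avail_out = (uInt)*dst_len;

        rc = deflate(&strm, Z_FINISH);        /* single-shot compression */
        if (rc != Z_STREAM_END) {             /* error, or output buffer too small */
            deflateEnd(&strm);
            return (rc == Z_OK) ? Z_BUF_ERROR : rc;
        }

        *dst_len = strm.total_out;            /* actual compressed size */
        return deflateEnd(&strm);
    }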

Does anyone using zlib compression sometimes encounter problems with faulty 
or truncated data?

Some of the data that returns an error code (from the integrity check inside 
the decompression algorithm) can be rescued by disabling all decompression 
error checks, as long as no part of the blob is missing; those results do 
seem to be correct.
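
That pattern suggests the compressed data itself is intact and only the 
trailing checksum disagrees, which is a safer thing to test for than 
switching all checks off. A sketch of such a salvage path, assuming a 
zlib-wrapped stream inflated in one shot into a buffer known to be large 
enough (the names are hypothetical; "incorrect data check" is the message 
the zlib sources I know of set on an Adler-32 mismatch):

    /* Hypothetical sketch: decompress a blob and accept the output when
     * the only failure is the final Adler-32 check. inflate() produces
     * output up to the point of failure, so total_out is still valid. */
    #include <string.h>
    #include "zlib.h"

    long salvage_blob(const unsigned char *src, uLong src_len,
                      unsigned char *dst, uLong dst_len)
    {
        z_stream strm;
        long n = -1;
        int rc;

        memset(&strm, 0, sizeof(strm));
        if (inflateInit(&strm) != Z_OK)
            return -1;

        strm.next_in   = (Bytef *)src;
        strm.avail_in  = (uInt)src_len;
        strm.next_out  = dst;
        strm.avail_out = (uInt)dst_len;

        rc = inflate(&strm, Z_FINISH);
        if (rc == Z_STREAM_END)               /* clean decompress */
            n = (long)strm.total_out;
        else if (rc == Z_DATA_ERROR && strm.msg != NULL &&
                 strcmp(strm.msg, "incorrect data check") == 0)
            n = (long)strm.total_out;         /* full output, bad checksum only */
        /* anything else (Z_OK/Z_BUF_ERROR without Z_STREAM_END) means the
         * stream was truncated or corrupted mid-way: do not trust it */

        inflateEnd(&strm);
        return n;
    }

A blob that is genuinely truncated falls through to the -1 case here, so the 
two failure modes I described above can at least be told apart.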


Are there any users of zlib (and maybe Powerfuse) who have ideas on where 
this inconsistent data could be generated?

All comments and ideas will be appreciated.

Best wishes for 2008 and the upcoming holidays.

Regards

Andries Bos
The Netherlands





      