> > So would it be possible to run that command each time you open the config
> > database and after any change to it?  That would give us a perfect way to
> > find out which commands were causing your problems.
> 
> Not really possible. The average update rate is low, but there are times when
> hundreds of settings are written, depending on which changes the user
> makes in preferences etc. Users can update settings from custom scripts,
> which may mean one update per session or hundreds per minute. Running a
> 5-second integrity check after each write would degrade performance badly.
> 

I had got the impression (and probably Simon had too) that the preferences
database was a lot smaller, so the integrity check would be a lot quicker.
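
For what it's worth, something along these lines would show how long a check
on open actually takes. It's just a rough sketch using Python's sqlite3
module; "settings.db" stands in for the preferences file:

# Rough timing test: run PRAGMA quick_check when the settings database is
# opened ("settings.db" is a placeholder for the preferences file).
import sqlite3
import time

conn = sqlite3.connect("settings.db")
start = time.perf_counter()
result = conn.execute("PRAGMA quick_check").fetchone()[0]
elapsed = time.perf_counter() - start
print("quick_check returned %r in %.3f seconds" % (result, elapsed))
conn.close()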

> I now also run an integrity_check when closing the settings database during
> application shut-down and will try to find a way to notify the user to retain
> the log file - in the hope that it contains more info. My users are not IT
> people, just average users, moms & pops. Displaying scary error messages about
> damaged databases and asking them to send log files would cause a lot of
> additional support work and probably bad reviews on social media. Database
> damage is a very sensitive area.
> 

I wonder whether you could provide those that have suffered a corrupt database
with a "special" build with extra logging and checks. You could warn them about
the scary messages and longer delays, explaining that this is part of your
investigation. These people already know there is a problem, so they are
unlikely to spread bad reviews. They may also be more likely to suffer another
corruption if it relates to a particular workflow.
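
Something along these lines is what I have in mind - just a rough sketch
using Python's sqlite3 module, with the file names as placeholders, not a
claim about how your application actually works:

# Sketch of the "special" debug build idea: log every SQL statement the
# application runs against the settings database, and run an
# integrity_check at shutdown. File names here are placeholders.
import logging
import sqlite3

logging.basicConfig(filename="settings_debug.log", level=logging.INFO)

conn = sqlite3.connect("settings.db")
conn.set_trace_callback(lambda sql: logging.info("SQL: %s", sql))

# ... normal application work goes here ...

result = conn.execute("PRAGMA integrity_check").fetchone()[0]
logging.info("integrity_check at shutdown: %s", result)
conn.close()

If a corruption does turn up, the log should show which statements ran in
that session, which is the information you are currently missing.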

As a user of your application I would be happy to help; unfortunately
(not for me :^) I've never had a corrupted database.

Regards

Andy Ling

