I *KNOW* doing transactional work (even just standard SELECTs) can cause
problems with SQLite over a network share, due to network file-locking
bugs and the like. My question is: would the backup API have the same
problem if I backed up a remote file to memory or local storage, worked
on the data locally, and then, when needed, wrote it back to the
original location with the same backup mechanism?
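For what it's worth, here is a minimal sketch of that pull/work-locally/push-back flow using Python's sqlite3 module, which wraps the same C-level backup API (sqlite3_backup_init/step/finish). The function names and paths are my own invention, not anything from SQLite itself:

```python
import sqlite3

def pull_to_local(remote_path):
    """Copy the remote database into an in-memory copy via the backup API."""
    src = sqlite3.connect(remote_path)
    dst = sqlite3.connect(":memory:")
    src.backup(dst)   # Connection.backup wraps sqlite3_backup_init/step/finish
    src.close()
    return dst

def push_back(local_conn, remote_path):
    """Overwrite the remote file with the local copy, same mechanism in reverse."""
    dst = sqlite3.connect(remote_path)
    local_conn.backup(dst)  # destination contents are replaced by the source
    dst.close()
```

Note that the push-back step still writes to the remote file through the share, so whatever locking the backup API takes on the destination would be subject to the same network-filesystem caveats as any other write.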

I do acknowledge the remote file can still be modified in the meantime,
but I can deal with that kind of race by changing file attributes,
renaming the remote file, using lock files, or something else.
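The lock-file variant of that idea could look something like this (a sketch only; the lock path and function names are assumptions, not an existing API):

```python
import os

def acquire_lock(lock_path):
    """Atomically create a lock file; return False if another writer holds it.
    O_EXCL makes creation fail when the file already exists."""
    try:
        fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return False
    os.write(fd, str(os.getpid()).encode())  # record the holder's PID
    os.close(fd)
    return True

def release_lock(lock_path):
    os.remove(lock_path)
```

One caveat: O_EXCL-style atomic creation has historically been unreliable on some network filesystems (older NFS in particular), so this is exactly the class of thing that needs testing on the target share.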

I ask this because I'm pondering switching from flat-file storage to a
database infrastructure for better 'versioning' of the data contained
within. I'll be testing on a gigabit network with a minimum of two
gigabit switches and roughly 0.01% utilization between points; worst
case, though, customer sites may still be running 10 Mbit hubs or token
ring. *shiver*
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
