On Fri, 7 Mar 2008 21:06:21 +1100 (EST), Jeff Brown
<[EMAIL PROTECTED]> wrote:
>"So, the file locking logic of many network
>filesystem implementations contains bugs (on both Unix
>and Windows). If file locking does not work as it
>should, it might be possible for two or more client
>programs to modify the same part of the same database
>at the same time, resulting in database corruption."

If concurrent accesses are rare, you can probably get away with moving
to a server-less DB like SQLite.

To provide some kind of protection, you could
- either write your own server process to which all clients send
their SQL queries,
- or provide optimistic locking by keeping track of changes in a
central table.
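The second option can be sketched in a few lines. This is a minimal example, not a drop-in fix: the `docs` table, its `version` counter column, and the `update_doc` helper are all hypothetical names chosen for illustration. The idea is that an update only succeeds if the row still has the version the client read earlier, so a concurrent writer is detected by a zero rowcount instead of silently overwritten.

```python
import sqlite3

# Hypothetical schema: a 'docs' table carrying a 'version' counter.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT, version INTEGER)"
)
conn.execute("INSERT INTO docs VALUES (1, 'hello', 0)")
conn.commit()

def update_doc(conn, doc_id, new_body, expected_version):
    """Apply the change only if nobody else bumped the version first."""
    cur = conn.execute(
        "UPDATE docs SET body = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_body, doc_id, expected_version),
    )
    conn.commit()
    # rowcount 0 means a concurrent writer changed the row first;
    # the caller should re-read the row and retry or report a conflict.
    return cur.rowcount == 1

ok = update_doc(conn, 1, "first edit", 0)     # succeeds, version becomes 1
stale = update_doc(conn, 1, "stale edit", 0)  # fails: version is no longer 0
```

On a losing update the client re-selects the row, merges or discards its change, and tries again with the new version number.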

As extra protection, you could add some kind of semaphore to this
table, either by using the OS's locking mechanism while making
changes to the table, or simply by creating/deleting a file to let
other clients know that another process is busy updating it.
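A rough sketch of the create/delete-a-file variant follows; the path name and timeouts are made up for the example. Note the caveat that motivated this thread: atomic `O_CREAT | O_EXCL` creation has itself been unreliable on some older network filesystems, so treat this as extra protection, not a guarantee.

```python
import os
import time

def acquire_lock(path, timeout=10.0, poll=0.1):
    """Try to create the marker file; fail if another client holds it."""
    deadline = time.monotonic() + timeout
    while True:
        try:
            # O_CREAT | O_EXCL makes creation atomic on a well-behaved
            # filesystem: exactly one client wins the race.
            fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            os.close(fd)
            return True
        except FileExistsError:
            if time.monotonic() >= deadline:
                return False  # gave up waiting for the other client
            time.sleep(poll)

def release_lock(path):
    """Delete the marker file so other clients may proceed."""
    os.remove(path)
```

A client would call `acquire_lock("db.lock")` before touching the shared table and `release_lock("db.lock")` afterwards; a crashed client leaves a stale marker behind, so in practice you would also want some way to detect and clear abandoned locks.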

If performance is too slow on a 10 Mbps LAN, switching to 1 Gbps
should take care of it, which might be a viable alternative to moving
to a C/S solution (rewriting, deploying, etc.)

_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
