Over NFS you are limited by the bandwidth of your network, probably 1-10 Mb/s.  Compare that to the disk speed on your host server, which is one or two orders of magnitude better; the NFS link could be up to 50 times slower.

If you want better distributed performance, use a client/server DBMS such as PostgreSQL.

Ritesh Kapoor wrote:
Hi,

Most of us are aware that SQLite on Linux has problems with NFS-mounted
volumes - basically the file locking issue.

If there is a single thread accessing the SQLite DB - and this is
guaranteed - then there is no need for file locking.  So I modified
the SQLite code and removed all of the locking mechanisms.
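For what it's worth, I believe a stock build can get much of the same
effect without patching the source, by holding the lock for the whole
connection (assuming a SQLite version that has this pragma):

  -- Hold the database lock for the life of the connection instead
  -- of locking and unlocking around every transaction.
  PRAGMA locking_mode = EXCLUSIVE;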

However, the performance of SQLite row inserts/deletes varies a lot
depending on whether the DB file is local or accessed over NFS.

I've also removed the sync mechanism and increased the SQLite page size
as well as the number of pages held in cache.
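On a stock build, I believe these changes correspond roughly to pragmas
like the following (the numbers are only examples, and page_size only
takes effect on a freshly created or vacuumed database):

  -- Skip the fsync() calls after writes; faster, but the database
  -- can be corrupted if the machine crashes mid-write.
  PRAGMA synchronous = OFF;
  -- Bigger pages mean fewer page reads/writes per row.
  PRAGMA page_size = 8192;
  -- Number of pages to keep in the in-memory cache.
  PRAGMA cache_size = 10000;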

I understand that all this can cause data loss if the system crashes,
but that is tolerable.

What I can't figure out is why the performance over NFS is still
horrible.  My application requires inserting one row at a time, many
times in a single run.  I can't use transactions for the inserts, but
I have used them for deletion.
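The access pattern is roughly this (table and columns made up for
illustration):

  -- Each standalone INSERT runs in its own implicit transaction, so
  -- every row is committed (and the journal file handled) separately.
  INSERT INTO events(id, payload) VALUES (1, 'first');
  INSERT INTO events(id, payload) VALUES (2, 'second');
  -- ...one statement per row, many times per run.

  -- Deletions, by contrast, are wrapped in one explicit transaction.
  BEGIN;
  DELETE FROM events WHERE id < 1000;
  COMMIT;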

Can anyone give me some more ideas to work with?  Does this performance
problem happen with other DBs as well?
