On Mon, Dec 20, 2010 at 11:00 AM, Harald Friedrich <[email protected]> wrote:
> We are producing the CAD-Software TARGET 3001!
> <http://en.wikipedia.org/wiki/TARGET_3001%21> and have switched our
> electronics components libraries to a SQLite database lately. As we also
> sell multiuser licenses, some of the customers want to have only one
> database, accessed from several (e.g. up to 10) workstations. 98% reading
> only. We got the experience that the performance slows down significantly
> if more than one user accesses the database on a Linux or Windows server
> simultaneously. And even after the multiuser access ends, the slowness
> remains until the last user of the multiuser access has closed our CAD
> program. Even a close of the database doesn't help. It seems to be a kind
> of locking? Any ideas?

I'm guessing this is the effect of your network filesystem trying to do
caching. SQLite really is intended to have the application and the database
disk file on the same host, not separated by a network.

Consider the following ASCII-art diagram showing the relationship of an
application, a database engine, and the disk drive:

   +------+      +-----------+      +------+
   | App  |<---->| DB Engine |<---->| Disk |
   +------+      +-----------+      +------+

The high-bandwidth link is the one between the DB engine and the disk. The
DB engine does a lot of filtering, so much less traffic flows from the DB
engine to the application than flows from the disk to the DB engine.

Suppose the disk and the application are on separate computers. Then there
has to be a network link somewhere. In a traditional client/server database
(ex: MySQL, PostgreSQL) the DB engine and the disk are located on the same
computer and the application is on a separate computer, so the slow network
link sits on the connection with the least traffic. This is good.

But with SQLite, the DB engine and the application are on the same computer
and the disk is on a different machine. That means the high-bandwidth link
between the DB engine and the disk has to go over the network. This is bad.
You have several options:

(1) Switch to PostgreSQL.

(2) Create your own custom mini-SQL-server using SQLite and your own
protocol for your applications to talk to that mini-server over the
network.

(3) Replicate the database to each host, if it really is read-only.

Regarding option (3), you might work out some hybrid scheme where the
application opens the canonical SQLite database over the network
filesystem, copies the content it needs into a local in-memory or
temp-disk-space database, then closes the remote database. From then on it
uses the local copy, which should be very fast. The sqlite3_backup()
interface might be useful to you for making the local copy.

-- 
D. Richard Hipp
[email protected]

_______________________________________________
sqlite-users mailing list
[email protected]
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

