> I would disagree with this, unless I misunderstand.  File copies (from the
> Finder under OS X) to/from our Xserve run at about 50 MBytes/s or about 50%
> of theoretical max on our Gbit LAN, whereas reading the records from the
> same file via SQLite is 20-25x slower (~2 MB/sec at best, terrible
> performance).  So there is plenty of raw I/O bandwidth across the LAN and
> network drive, but for some reason SQLite access to its remote files is
> extremely slow (to be clear: these are single users accessing single files).

Peter, there is far more latency involved in going over a network
than in hitting a local disk.  A single select can generate
potentially hundreds of small read requests as SQLite traverses
the Btree and fetches pages.  On a local machine your OS's
read-aheads and caching reduce the cost of those reads to nearly
nothing, but over a network you pay the network latency and
protocol overhead on _each_ of those hundreds of requests.  The raw
sequential throughput you measured with file copies really has no
relevance here at all.
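To put rough numbers on it (the page count and round-trip time below
are assumptions for illustration, not measurements of your setup),
even a modest per-request latency is enough to explain throughput in
the low MB/s range:

    # Back-of-envelope: every number here is an assumption, not a measurement.
    page_reads_per_select = 200   # pages touched walking the Btree (assumed)
    rtt_seconds = 0.0005          # ~0.5 ms per page fetch over the LAN (assumed)
    page_size_bytes = 1024        # SQLite's default page size

    time_per_select = page_reads_per_select * rtt_seconds        # ~0.1 s
    bytes_moved = page_reads_per_select * page_size_bytes
    print(bytes_moved / time_per_select / 1e6, "MB/s effective")  # ~2 MB/s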

Like Richard said, use the right tool for the job.  You need a
database that resides on the server and communicates using its
own network protocol.  If you'd like to continue using SQLite
you might check out some of the server/client wrappers out there:
http://www.sqlite.org/cvstrac/wiki?p=SqliteNetwork

You've got to realize that no other non-server-based (embedded)
database would be able to perform any better in this situation.
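One quick way to confirm that the network round trips, and not SQLite
itself, are the bottleneck: copy the database file to a local disk and
time the same query against both copies.  A minimal sketch using
Python's sqlite3 module (the paths and query are placeholders;
substitute your own):

    import sqlite3, time

    def time_query(db_path, sql):
        # Open the database, run the query once, return the elapsed time.
        conn = sqlite3.connect(db_path)
        start = time.time()
        conn.execute(sql).fetchall()
        conn.close()
        return time.time() - start

    # Placeholder paths and query -- replace with your own:
    # print(time_query("/Volumes/xserve/mydata.db", "SELECT * FROM mytable"))
    # print(time_query("/tmp/mydata_local.db",      "SELECT * FROM mytable"))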

-Brad