On 27.08.2012 03:23, bruceg113...@gmail.com wrote:
> My program uses Python 2.6 and Sqlite3 and connects to a network
> database 100 miles away.
Wait, isn't SQLite completely file-based? In that case, SQLite accesses
a file, which in turn is stored on a remote filesystem. This means that
there are other components involved here, namely your OS, the network
(bandwidth & latency), the network filesystem and the filesystem on the
remote machine. It would help if you told us what you have there.
> My program reads approx 60 records (4000 bytes) from a Sqlite
> database in less than a second. Each time the user requests data, my
> program can continuously read 60 records in less than a second.
> However, if I access the network drive (e.g. DOS command DIR /S)
> while my program is running, my program takes 20 seconds to read the
> same 60 records. If I restart my program, my program once again takes
> less than a second to read 60 records.
Questions here:
1. Is each record 4kB or are all 60 records together 4kB?
2. Does the time for reading double when you double the number of
records? Typically the total time is B + C * N, where B is a fixed
setup bias and C the per-record cost; it would be interesting to know
B and the actual time (and size) of each record (see the timing sketch
after this list).
3. How does the timing change while dir /s is running?
4. What if you run two instances of your program?
5. Is the slowdown only reset by restarting the program, or does it
also go away once the dir /s call has finished? What if you close and
reopen the database without terminating the program?
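
For question 2, one way to estimate B and C is to time reads for a few
values of N and see how the total scales. Here is a minimal sketch in
Python 2.6 syntax; the UNC path and the table name "data" are made up,
so substitute your actual database path and schema:

    import sqlite3
    import time

    DB_PATH = r"\\server\share\mydb.sqlite"  # hypothetical network path

    def time_read(n):
        # Open a fresh connection so each measurement starts cold.
        conn = sqlite3.connect(DB_PATH)
        try:
            # On Windows/Python 2, time.clock() is a high-resolution
            # wall clock, which suits short measurements like this.
            t0 = time.clock()
            rows = conn.execute("SELECT * FROM data LIMIT ?", (n,)).fetchall()
            return time.clock() - t0
        finally:
            conn.close()

    for n in (15, 30, 60, 120):
        print "%4d records: %.3f s" % (n, time_read(n))

If the total roughly doubles from 60 to 120 records, the per-record
cost C dominates; if it barely moves, the fixed bias B does. Running
the same loop while dir /s is active, and again after it has finished,
would answer questions 3 and 5 along the way.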
My guess is that the concurrent access by another program forces the
accesses to become synchronized, whereas before most of the data was
served from a cache. That would mean a complete roundtrip between the
two machines for every access, which can easily blow up the timing via
the latency.
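To put rough numbers on that: if each of the 60 reads degenerates into
network and filesystem roundtrips costing around 300 ms apiece (a
made-up but plausible figure for an uncached remote filesystem), that
alone is 60 * 0.3 s = 18 s, right in the range of the 20 seconds you
observe.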
In any case, I would try Python 2.7, in case this is a bug that has
already been fixed.
Good luck!
Uli