On 23 Aug 2010, at 3:51pm, Chad Rebuck wrote:

> I have a php script that goes through a 60,000 row table (currently a 100mb
> sqlite 3 format database that could grow to 200mb) and looks for rows that
> are missing data.  Then the script retrieves data (whois queries and
> traceroute results) and inserts it into the table.  I want to run
> many occurrences of this script to populate the database quicker, but I am
> getting into trouble with the database being locked constantly if I try to
> run only two instances of the script.

Thanks for that description, which saves a lot of tedious Q&A before we can 
figure out what you want.

> The faq indicates concurrency really isn't necessary for most situations
> with the overall speeds of today's computers.  However, my server is a 12
> year old ibm pc running linux with 64m of memory.  The hard disk speed is
> very slow.  Is this going to prohibit running more than one process that
> writes to a single database table?

It depends on which part of the single-process version is the bottleneck.  It 
may be that the slowest element of your task is writing to the database file.  
If that's the case then splitting the job across several processes will make 
no difference, since they all have to write to the same database file in the 
end anyway, and only one process can write to it at a time.  A slow hard disk 
suggests that this is the case: intensive disk activity like writing to a SQL 
database is probably occupying most of your application's time.

If the task of fetching the missing data were taking the majority of the time 
you might get some advantage from using multiple processes: one could do its 
work while another was waiting for network activity, for example.
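To illustrate overlapping those network waits, here is a minimal Python sketch 
(the `fetch` function is a stand-in for a blocking lookup such as whois or 
traceroute, not anything from your script):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_all(hosts, fetch, workers=8):
    """Run the blocking fetch() for many hosts at once.

    While one thread is blocked waiting on the network, the others keep
    working, so total wall-clock time approaches the slowest lookup rather
    than the sum of all of them.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # pool.map preserves input order, so zip pairs each host with its result
        return dict(zip(hosts, pool.map(fetch, hosts)))
```

The same idea applies whether the workers are threads or separate processes; 
the point is only that network waits, unlike disk writes to one file, can 
genuinely proceed in parallel.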

Given the limitations of your hardware you might consider a different way of 
splitting up your application: have one task that finds all entries with 
missing data.  Then have many (possibly parallel) processes fetch that 
information but write it to a long text file.  Then finally have another 
process that reads the text file and applies all the changes to the database 
in one giant transaction.  That might speed things up considerably, since 
SQLite pays the cost of synchronising with the disk once per transaction 
rather than once per row.
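That final step could be sketched like this (shown in Python rather than PHP; 
the `hosts` table and its `whois`/`traceroute` columns are assumptions standing 
in for your schema, and the workers are assumed to emit one tab-separated line 
per host):

```python
import sqlite3

def apply_results(db_path, results_path):
    """Read worker output (host, whois, traceroute per line, tab-separated)
    and apply every update in a single transaction, so the database file is
    locked and synced once rather than once per row."""
    conn = sqlite3.connect(db_path)
    try:
        with conn:  # one transaction: commits on success, rolls back on error
            with open(results_path, encoding="utf-8") as results:
                for line in results:
                    host, whois, trace = line.rstrip("\n").split("\t")
                    conn.execute(
                        "UPDATE hosts SET whois = ?, traceroute = ? WHERE host = ?",
                        (whois, trace, host),
                    )
    finally:
        conn.close()
```

Because only this one process ever touches the database, there is no lock 
contention at all, and the single commit amortises the disk cost.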

> I am new to databases so perhaps there is something very basic I am
> overlooking here.  Wanted to give sqlite a real effort before trying mysql.

A slow hard disk will cause the same problem with MySQL.  In fact, since MySQL 
is larger and more complicated than SQLite, it does more work when writing to 
disk, so you may get even worse results from it.  On the other hand MySQL 
makes heavy use of caching, so if you have enough free memory it needs to do 
less disk activity.

Simon.
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
