Igor Tandetnik wrote:
> "Rosemary Alles" <al...@ipac.caltech.edu> wrote
> in message news:af79a266-b697-4924-b304-2b1feccba...@ipac.caltech.edu
>   
>> Run on a single processor, the following query is quite fast:
>>
>> When concurrency is introduced (simply running the query on several
>> processors against the same database - say 300 instances of it), there
>> is a massive slowdown.
>>     
>
> Well, you may have multiple CPUs, but you only have a single hard drive. 
> That drive head can't be in multiple places simultaneously.
>
> Igor Tandetnik
>
>   
Further to Igor's point, the machines we use today almost always have 
one hard disk, and database activity involves sharing that single 
resource between users.  If there is heavy disk activity, you will get 
maximum throughput by having one process running at a time.  If there is 
sporadic disk activity interspersed with other processing, you will get 
more throughput with concurrent processes.
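A quick way to see the effect for yourself is to time the same read 
query against one database file, first one process at a time and then 
from a pool of processes all contending for the same disk.  The Python 
sketch below is only illustrative and makes assumptions: test.db and 
the query are placeholders, and 32 workers stands in for the 300 
instances mentioned above.

import sqlite3
import time
from multiprocessing import Pool

DB_PATH = "test.db"   # placeholder file; created on first connect
QUERY = "SELECT count(*) FROM sqlite_master"   # stand-in for the real query

def run_query(_):
    # Each worker opens its own connection, runs the query, and closes.
    con = sqlite3.connect(DB_PATH)
    try:
        return con.execute(QUERY).fetchone()
    finally:
        con.close()

if __name__ == "__main__":
    n = 32   # assumed worker count; far fewer than 300, same idea

    t0 = time.time()
    for i in range(n):                 # one process at a time
        run_query(i)
    print("serial:     %.2fs" % (time.time() - t0))

    t0 = time.time()
    with Pool(n) as pool:              # n processes sharing one disk
        pool.map(run_query, range(n))
    print("concurrent: %.2fs" % (time.time() - t0))

With a query that really hits the disk, the concurrent figure will 
usually be nowhere near n times faster, and past some point it gets 
worse rather than better.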

Remember that all multi-processing and multi-threading involves 
substantial overhead.  The throughput of old, slow computers with a 
simple monitor instead of a multi-tasking O/S was very impressive 
because there was no task-switching overhead.  Machines optimized for 
multi-user database activity have many disk spindles, with the database 
distributed between them so that the disk farm is no longer a single 
resource.
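SQLite is not a client/server engine, but one rough way to approximate 
that layout is ATTACH: related tables can live in separate database 
files that an administrator places on different disks.  The sketch 
below is only an illustration; the temporary directories stand in for 
real mount points such as /disk1 and /disk2, and the table names are 
made up.

import os
import sqlite3
import tempfile

# Two directories standing in for two separate spindles.
disk1 = tempfile.mkdtemp()
disk2 = tempfile.mkdtemp()

con = sqlite3.connect(os.path.join(disk1, "orders.db"))
con.execute("CREATE TABLE orders(id INTEGER PRIMARY KEY, item TEXT)")

# Attach a second file that could sit on a different disk.
con.execute("ATTACH DATABASE ? AS history",
            (os.path.join(disk2, "history.db"),))
con.execute("CREATE TABLE history.events(order_id INTEGER, event TEXT)")

con.execute("INSERT INTO orders VALUES (1, 'widget')")
con.execute("INSERT INTO history.events VALUES (1, 'shipped')")

# A join works across the two files transparently.
for row in con.execute(
        "SELECT o.item, h.event FROM orders AS o "
        "JOIN history.events AS h ON h.order_id = o.id"):
    print(row)
con.close()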
