> Any suggestions?

If you know that you will mostly perform per-site queries, then you want
all the readings for a given site to be contiguous in the database
file (or files). You can accomplish that in any of the ways you've outlined.

Hopefully your reading_ids always increase as time goes forward.
If so, consider collapsing timestamp and reading_id into a single value
(the timestamp) and making that your INTEGER PRIMARY KEY if you can.
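
For instance, something like this (only a sketch: the file name, the
one-table-per-site layout, and the column names are assumptions, not
your actual schema):

import sqlite3

conn = sqlite3.connect("readings.db")

# In SQLite an INTEGER PRIMARY KEY column is the rowid itself, so using
# the timestamp as the key stores rows in time order at no extra index cost.
conn.execute("""
    CREATE TABLE IF NOT EXISTS site_0001_readings (
        ts    INTEGER PRIMARY KEY,  -- epoch seconds or ms, replaces reading_id
        value REAL
    )
""")
conn.commit()
conn.close()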

There is no point guessing about the various strategies - try all your ideas
and do timings using typical queries under normal usage patterns (vacuumed vs.
not vacuumed). There are benefits to separating the data into separate
databases and/or tables (reducing row size by eliminating site_id) as
well as to keeping it all together (reducing your admin/coding effort).
In general you want to reduce the reading row to the minimum number
of bytes to obtain greater insert and query speed.
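
A minimal timing harness might look like this (again a sketch; the query,
the file name, and the time range are placeholders for whatever your
typical workload actually is):

import sqlite3, time

def time_query(db_path, sql, params=()):
    # Time one query against one candidate layout; run it both before
    # and after a VACUUM and compare.
    conn = sqlite3.connect(db_path)
    start = time.perf_counter()
    rows = conn.execute(sql, params).fetchall()
    elapsed = time.perf_counter() - start
    conn.close()
    return len(rows), elapsed

t_start, t_end = 0, 1_000_000_000   # placeholder range; use a typical query window
n, secs = time_query("readings.db",
                     "SELECT ts, value FROM site_0001_readings "
                     "WHERE ts BETWEEN ? AND ?",
                     (t_start, t_end))
print(n, "rows in", round(secs, 3), "seconds")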

The manner and order in which you populate your tables also plays a
large role. If you add each day's data from all sites into a
one-database/one-table layout without vacuuming, then each site's rows
will be scattered throughout the database file and your per-site
queries will take longer. If you bulk load all the data at once prior
to analysis, do it one site at a time with rows inserted in time order.
That will reduce or eliminate the need to vacuum the database.
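
For the one-database/one-table bulk load, the load loop might look
roughly like this (a sketch only; readings_for_site() is a stand-in for
however you read a site's raw data, already sorted by time):

import sqlite3

def bulk_load(db_path, site_ids, readings_for_site):
    # readings_for_site(site_id) is hypothetical: it should yield
    # (ts, value) pairs sorted by timestamp.
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS readings "
                 "(site_id INTEGER, ts INTEGER, value REAL)")
    for site_id in site_ids:
        rows = ((site_id, ts, value)
                for ts, value in readings_for_site(site_id))
        # One site at a time, in time order, one transaction per site:
        # each site's pages end up adjacent in the file, so per-site
        # queries touch fewer pages and VACUUM buys you little or nothing.
        with conn:                       # commits, or rolls back on error
            conn.executemany("INSERT INTO readings VALUES (?, ?, ?)", rows)
    conn.close()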
