FWIW, instead of creating one big table you may be able to partition the 
data at the database level.  I know Postgres does this; I'm not sure about 
others.

If you're unfamiliar with it: you create one master table that has an 
id column for the stock, then create a partition for each stock.  You 
interact with a single table, but the database partitions it all on the 
filesystem into 500 tables (or fewer if you group stocks into ranges).  

You'd still have a slightly bigger dataset from the overhead of the id on 
each row (i.e. a small integer column), but speed and performance should be 
roughly comparable to using 500 separate tables.
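For concreteness, here's a rough sketch of what that setup could look like 
in Postgres, using declarative list partitioning (Postgres 10+; older 
versions do the same thing via table inheritance).  The table and column 
names are made up for illustration:

```sql
-- One master table, partitioned by the stock id column mentioned above.
CREATE TABLE quotes (
    stock_id  smallint      NOT NULL,  -- the per-row overhead column
    ts        timestamptz   NOT NULL,
    price     numeric(12,4) NOT NULL
) PARTITION BY LIST (stock_id);

-- One partition per stock (repeat for each of the ~500 stocks),
-- or put several ids in one partition to end up with fewer tables.
CREATE TABLE quotes_s1 PARTITION OF quotes FOR VALUES IN (1);
CREATE TABLE quotes_s2 PARTITION OF quotes FOR VALUES IN (2);

-- You only ever talk to the master table; Postgres routes rows to
-- the right partition on disk and prunes partitions on reads.
INSERT INTO quotes VALUES (1, now(), 187.5000);
SELECT * FROM quotes WHERE stock_id = 1;
```

From SQLAlchemy you'd map against the master table as usual; the 
partitioning is transparent to queries.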

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.