The problem is that I need to search/sort by ANY of the 284 fields at
times - 284 indexes would be a bit silly, so there will be a lot of
sequential scans (the table has 60,000 rows). Given those criteria, will
fewer columns spread across more tables provide a performance benefit?
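One alternative worth sketching (not from the original post) is to store the
indicators in "long" (entity-attribute-value) form instead of 284 wide
columns, so a single composite index covers searches and sorts on any
indicator. The sketch below uses Python's built-in sqlite3 just to keep it
self-contained; the table name `technicals_long` and the column names
`indicator`/`value` are illustrative assumptions, not the original schema:

```python
import sqlite3

# Hypothetical sketch: one row per (symbol, indicator) instead of one
# wide row per symbol. Names are illustrative, not the poster's schema.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""
    CREATE TABLE technicals_long (
        symbol    TEXT NOT NULL,   -- was the varchar(12) primary key
        indicator TEXT NOT NULL,   -- e.g. 'last', 'open', 'high'
        value     REAL,
        PRIMARY KEY (symbol, indicator)
    )
""")
# One composite index serves range searches and sorts on ANY indicator,
# replacing 284 per-column indexes on the wide table.
cur.execute("CREATE INDEX idx_ind_val ON technicals_long (indicator, value)")

rows = [
    ("IBM",  "last", 118.2), ("IBM",  "open", 117.5), ("IBM",  "high", 119.0),
    ("MSFT", "last",  23.8), ("MSFT", "open",  23.5), ("MSFT", "high",  24.1),
]
cur.executemany("INSERT INTO technicals_long VALUES (?, ?, ?)", rows)

# Sort every symbol by one chosen indicator; the index satisfies both
# the WHERE filter and the ORDER BY.
cur.execute("""
    SELECT symbol, value FROM technicals_long
    WHERE indicator = 'high'
    ORDER BY value DESC
""")
print(cur.fetchall())
```

The trade-off is row count: 60,000 symbols x 284 indicators is about 17
million narrow rows, but each lookup becomes an index probe rather than a
sequential scan over wide rows.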
-j
On Tue, 2009-08-04 at 16:03
I have a table in a stock analysis database that has 284 columns. All
columns are int, double, or datetime, except the primary key
(varchar(12)). By my calculation this makes each row 1779 bytes.
CREATE TABLE technicals (
symbol varchar(12) primary key,
last double,
open double,
high