I have a table in a stock analysis database that has 284 columns.  All
columns are int, double, or datetime, except the primary key
(varchar(12)).  By my calculation this makes each row 1779 bytes.

CREATE TABLE technicals (
  symbol varchar(12) primary key,
  last double,
  open double,
  high double,
  ... etc ...
  ma_5day double,
  ma_20day double,
  ... etc ...
);
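Incidentally, that per-row estimate can be sanity-checked against what
the server actually stores:

-- Avg_row_length in the output reports the average on-disk row size
SHOW TABLE STATUS LIKE 'technicals';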

I've been looking into splitting it into multiple tables, along these lines:

-- Basic info
CREATE TABLE technicals_basic (
  symbol varchar(12) primary key,
  last double,
  open double,
  high double
);

-- Moving averages
CREATE TABLE technicals_ma (
  symbol varchar(12) primary key,
  ma_5day double,
  ma_20day double,
  FOREIGN KEY (symbol) REFERENCES technicals_basic (symbol)
);

Will splitting this into multiple tables provide any performance
benefit?  Queries will never ask for all 284 columns, so under the new
system, even if split into 20 tables, there would be a maximum of 2-3
joins per query.  The question is whether the reduced row size
provides enough benefit to justify the extra joins.
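For example, one of those queries might look like this (using the two
split tables sketched above; the crossover condition in the WHERE
clause is just an illustration):

SELECT b.symbol, b.last, m.ma_20day
FROM technicals_basic AS b
JOIN technicals_ma AS m ON m.symbol = b.symbol
WHERE m.ma_5day > m.ma_20day
ORDER BY b.last DESC;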

I need to be able to search and sort by ANY of the 284 fields at times,
so my concern is that the current large row size will slow down full
table scans (60,000 rows).  Is that worry justified?
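For concreteness, that kind of query would look something like this
(ma_20day standing in for any of the 284 columns):

-- With no index on ma_20day, I'd expect EXPLAIN to show type: ALL
-- (a full table scan) plus "Using filesort" in the Extra column.
EXPLAIN SELECT symbol, ma_20day
FROM technicals
ORDER BY ma_20day DESC
LIMIT 50;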

-j

