I think what Harald is saying (& if he's not, then I say this):

You could use an alternative table structure like this; it should
make queries much quicker:


create table raddata_2004_10_ONE (
 granID               integer not null,
 scanID               tinyint unsigned not null,
 fpID                 tinyint not null,
 c1                   float not null,
...
 c500                 float not null
);

create table raddata_2004_10_TWO (
 granID               integer not null,
 scanID               tinyint unsigned not null,
 fpID                 tinyint not null,
 c501                 float not null,
...
 c1000                float not null
);

But maybe with a more logical way of partitioning the columns among
the different tables.
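If a query ever needs channels from both halves, you can join the two
tables back together on the shared key. A quick sketch (column choices
here are just for illustration):

```sql
select o.granID, o.scanID, o.fpID, o.c1, t.c501
from raddata_2004_10_ONE as o
join raddata_2004_10_TWO as t
  on  o.granID = t.granID
  and o.scanID = t.scanID
  and o.fpID   = t.fpID
where o.granID between 147568 and 150000;
```

The join cost is the price you pay for the split, so it makes sense to
group columns that are usually queried together into the same table.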

Or, another option you've perhaps already thought of: a table like
this. It should make inserts and index maintenance worse, but querying
much better.

create table raddata_2004_10_THREE (
 granID               integer not null,
 scanID               tinyint unsigned not null,
 fpID                 tinyint not null,
 cID                  smallint not null,
 cValue               float not null,
 primary key (granID, scanID, fpID, cID),
 index (granID, cID)
);

OLD QUERY:
select c1 from raddata_2004_10 where granID between 147568 and 150000;

NEW QUERY:
select cValue from raddata_2004_10_THREE where granID between 147568
and 150000 and cID = 1;
(should be very fast)
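One nice property of the narrow layout is that fetching several
channels at once is just a matter of widening the cID condition, and
you can pivot the rows back into columns if you want them side by
side. A sketch, assuming the narrow table is named
raddata_2004_10_THREE:

```sql
select granID, scanID, fpID,
       max(case when cID = 1 then cValue end) as c1,
       max(case when cID = 2 then cValue end) as c2
from raddata_2004_10_THREE
where granID between 147568 and 150000
  and cID in (1, 2)
group by granID, scanID, fpID;
```

Each channel you add is one more line in the select list, rather than
a schema change.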

---> Incidentally, a question: when you have a table like this with a
primary key spanning many columns, is there any performance benefit to
adding a new auto-increment column as the primary key and keeping the
old primary key as a unique index? I thought there might be some
addressing / hashing issues which worked out quicker.
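To make the question concrete, this is the variant I mean (table name
hypothetical):

```sql
create table raddata_2004_10_AUTO (
 id                   int unsigned not null auto_increment,
 granID               integer not null,
 scanID               tinyint unsigned not null,
 fpID                 tinyint not null,
 cID                  smallint not null,
 cValue               float not null,
 primary key (id),
 unique key (granID, scanID, fpID, cID)
);
```

i.e. the old composite key still enforces uniqueness, but the row is
addressed by a short surrogate key instead.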

Tom.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]
