Quoting Tom Cunningham <[EMAIL PROTECTED]>:
I think what Harald is saying (& if he's not, then I say this):
You could have an alternative table structure like this; it should
make queries much quicker:
create table raddata_2004_10_ONE (
granID integer not null,
scanID tinyint unsigned not null,
fpID
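(The digest truncates the CREATE TABLE here. A sketch of the likely shape,
assuming the idea is to store one float per row instead of 2500 float
columns; the fpID type, the value column, and the key are my guesses, not
Tom's actual schema:)

create table raddata_2004_10_ONE (
  granID integer not null,
  scanID tinyint unsigned not null,
  fpID   tinyint unsigned not null,   -- assumed, matching the "3 tinyints"
                                      -- mentioned later in the thread
  value  float not null,              -- one reading per row
  primary key (granID, scanID, fpID)
);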
In article <[EMAIL PROTECTED]>,
Ken Gieselman <[EMAIL PROTECTED]> writes:
> The second issue is query performance. It seems that regardless of
> what fields are selected, it reads the entire row? Since a monthly
> table averages 840GB, this takes a while, even on a well-organized
> query like 'S
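(An aside on checking that: whether MySQL can avoid touching those wide rows
at all shows up in EXPLAIN. A sketch, with the monthly table name inferred
from raddata_2004_10_ONE above; the index name and the literal granID value
are made up:)

create index idx_gran_scan on raddata_2004_10 (granID, scanID);
explain select granID, scanID
  from raddata_2004_10
 where granID = 12345;
-- "Using index" in the Extra column means the query was answered from the
-- index alone, without reading the full data rows.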
Hi,
On Thursday, October 21, 2004, at 04:40 PM, [EMAIL PROTECTED] wrote:
I don't think that he is worried about table scanning, he is worried about
ROW scanning. Each of his rows is so large (2500*(size of float) +
3*(size of tinyint) + some other stuff) that just moving that much data
around through his machine is consuming too much time.
Quoting [EMAIL PROTECTED]:
Look at some my.cnf options. You can tell MySQL to use keys more often than
table scans with a var called max_seeks_keys=100 // something like that
Definitely. In fact, that's not really the issue at hand, since
max_seeks_for_key is already set to 1000 here. Shawn hit the nail on the
head.
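(For reference, the variable being reached for is max_seeks_for_key; it can
go in my.cnf or be set at runtime. A quick sketch of checking and setting
it, using the value of 1000 mentioned above:)

show variables like 'max_seeks_for_key';
set global max_seeks_for_key = 1000;  -- caps the optimizer's cost estimate
                                      -- for index seeks, biasing it toward
                                      -- keys over table scans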
I don't think that he is worried about table scanning, he is worried about
ROW scanning. Each of his rows is so large (2500*(size of float) +
3*(size of tinyint) + some other stuff) that just moving that much data
around through his machine is consuming too much time.
If you have a query that
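(Rough arithmetic on the row size above: 2500 floats * 4 bytes + 3 tinyints
* 1 byte is about 10,003 bytes, call it 10KB per row. At 840GB for a monthly
table that is on the order of 84 million rows, so anything that reads whole
rows is shoveling gigabytes of data even when only a few columns are
wanted.)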
DVP
Dathan Vance Pattishall http://www.friendster.com
> -----Original Message-----
>
> So, is there a faster way to insert/index the data? Would a different
> table or index type improve performance?
Use LOAD DATA INFILE ... IGNORE; you might get a better insert speed
increase.
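(A minimal sketch of that suggestion; the file path, field terminator, and
column list are placeholders, since only part of the schema appears above:)

LOAD DATA INFILE '/tmp/raddata_2004_10.csv'
IGNORE INTO TABLE raddata_2004_10
FIELDS TERMINATED BY ','
(granID, scanID, fpID);
-- IGNORE skips rows that would duplicate a unique key instead of aborting.
-- For large MyISAM loads it can also help to bracket the load with
-- ALTER TABLE raddata_2004_10 DISABLE KEYS; ... ENABLE KEYS;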