I think what Harald is saying ( if he's not, then I say this):
You could have an alternative table structure like this; it should
make queries much quicker:
create table raddata_2004_10_ONE (
granID integer not null,
scanID tinyint unsigned not null,
fpID
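The snippet is cut off in the archive right after fpID. A guess at how the rest of that "one value per row" definition might look — the fpID type, the value column, and the key are my assumptions, not part of the original message:

```sql
-- Sketch only: everything after scanID is assumed, since the original
-- message is truncated here.
CREATE TABLE raddata_2004_10_ONE (
  granID  INTEGER NOT NULL,
  scanID  TINYINT UNSIGNED NOT NULL,
  fpID    SMALLINT UNSIGNED NOT NULL,  -- which of the ~2500 fields this row holds
  value   FLOAT NOT NULL,
  PRIMARY KEY (granID, scanID, fpID)
);
```

The point of a layout like this is that selecting a handful of fields only touches the rows for those fpIDs, instead of dragging the entire wide row off disk.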
Quoting Tom Cunningham [EMAIL PROTECTED]:
In article [EMAIL PROTECTED],
Ken Gieselman [EMAIL PROTECTED] writes:
The second issue is query performance. It seems that regardless of
what fields are selected, it reads the entire row? Since a monthly
table averages 840GB, this takes a while, even on a well-organized
query like 'Select
Hi Folks --
Ran into a couple performance issues, and looking for some optimization tips :)
I'm currently using MySQL 4.1.5-gamma, built from the bitkeeper tree a month or
so ago. I have a table which is roughly 2500 columns by 91 million rows (I get
4 of these a month, from the data we're
DVP
Dathan Vance Pattishall http://www.friendster.com
-Original Message-
So, is there a faster way to insert/index the data? Would a different
table or
index type improve performance?
Use LOAD DATA INFILE ... IGNORE; you might get a better insert speed.
I don't think that he is worried about table scanning, he is worried about
ROW scanning. Each of his rows is so large (2500*(size of float) +
3*(size of tinyint) + some other stuff) that just moving that much data
around through his machine is consuming too much time.
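A quick back-of-the-envelope check of that row-size claim, using 4 bytes per FLOAT and 1 byte per TINYINT:

```sql
-- Row size and table size estimate; 2500 floats + 3 tinyints per row,
-- 91 million rows, as described earlier in the thread.
SELECT 2500 * 4                                   AS float_bytes,    -- 10000
       3 * 1                                      AS tinyint_bytes,  -- 3
       (2500 * 4 + 3 * 1) * 91000000 / POW(1024, 3) AS approx_gb;    -- ~848
```

That works out to roughly 10 KB per row and about 848 GB per monthly table, which lines up with the 840 GB figure quoted above — so yes, any query that touches every row is moving nearly a terabyte.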
If you have a query
Quoting [EMAIL PROTECTED]:
Look at some my.cnf options. You can tell MySQL to use keys more often than
table scans with a variable called max_seeks_for_key=100, or something like that.
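For reference, that variable goes in the server section of my.cnf; the value 100 is just the one floated above, not a recommendation:

```ini
# my.cnf fragment -- value illustrative only.
# A low max_seeks_for_key makes the optimizer assume index seeks are
# cheap, biasing it toward index use over table scans.
[mysqld]
max_seeks_for_key = 100
```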
Definitely worth a look, but that's not really the issue at hand, since
max_seeks_for_key is already set to 1000 here. Shawn hit
Hi,
On Thursday, October 21, 2004, at 04:40 PM, [EMAIL PROTECTED] wrote: