Ricardo,

The best-performing solution is to create another column to hold (time_utc-1004127737) DIV 86400.
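Something along these lines (a sketch only, untested; the table name comes from your original message, and the column and index names here are made up for illustration):

```sql
-- Add a column for the precomputed day number:
ALTER TABLE t129_250_0_11 ADD COLUMN day_num INT NOT NULL DEFAULT 0;

-- Backfill it once for the existing rows:
UPDATE t129_250_0_11
   SET day_num = (time_utc - 1004127737) DIV 86400;

-- Composite index so the GROUP BY can use it:
CREATE INDEX idx_prefix_day ON t129_250_0_11 (prefix, day_num);

-- The query then avoids per-row arithmetic:
SELECT prefix, day_num, COUNT(*)
  FROM t129_250_0_11
 GROUP BY prefix, day_num;
```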
Update the table to set this value correctly for every row, then calculate the value for this column every time you insert more data. Create an index on prefix and the new column, and this should become quick, especially if you tune sort buffers and cache sizes. This solution obviously comes at the expense of more storage space, but it will speed things up.

Andy

> -----Original Message-----
> From: Brent Baisley [mailto:[EMAIL PROTECTED]
> Sent: 23 September 2004 13:48
> To: [EMAIL PROTECTED]
> Cc: [EMAIL PROTECTED]
> Subject: Re: Working with 160M entries table
>
> I don't think indexes have anything to do with it, unless you have an
> index on the prefix field, in which case it might be used for the
> grouping. But regardless, you are selecting all 160M records (no index
> used), doing 160M calculations (no index used), and then grouping
> 160M records (maybe an index used). That's a lot of work going on, for
> which you need a lot of memory, fast I/O and the proper my.cnf
> settings. For instance, if your sort buffer setting in the config file
> is set pretty high, this may take quite a while.
>
> On Sep 23, 2004, at 3:35 AM, Ricardo Oliveira wrote:
>
> > Hi,
> >
> > I'm doing a query on the following table:
> >
> > +----------+-------------+------+-----+---------+-------+
> > | Field    | Type        | Null | Key | Default | Extra |
> > +----------+-------------+------+-----+---------+-------+
> > | time_utc | int(11)     |      | PRI | 0       |       |
> > | prefix   | varchar(18) |      | PRI |         |       |
> > +----------+-------------+------+-----+---------+-------+
> >
> > The query is:
> > ---------------------------------------------------------
> > select prefix, ((time_utc-1004127737) div 86400), count(*) from
> > t129_250_0_11 group by prefix, ((time_utc-1004127737) div 86400);
> > ---------------------------------------------------------
> >
> > I'm running it on a dual-processor 2GHz Athlon 64-bit machine.
> > The only problem is that the table has about 160,000,000 (160M)
> > entries, and I'm afraid that disk I/O looking for indexes is taking
> > most of the time. The query has been running for almost 10 hours
> > now. Do you have any suggestions that might speed up the query? Do
> > you think removing the indexes would speed things up?
> >
> > Thanks in advance for your help!
> >
> > --Ricardo
> >
> > --
> > MySQL General Mailing List
> > For list archives: http://lists.mysql.com/mysql
> > To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
>
> --
> Brent Baisley
> Systems Architect
> Landover Associates, Inc.
> Search & Advisory Services for Advanced Technology Environments
> p: 212.759.6400/800.759.0577