Good day,
        We're having difficulty with a database table consisting of a single
varchar column with 130,000,000 records.  The table is around 3.3 GB.

We want to select the unique values from the table and have been using a
variety of commands to attempt to do so (ideally we'll insert the results
into a table, but I also tried writing to a file to see if it would make a
difference).  These are the last few commands I tried:

insert into unique_table (field) select distinct field from table ;
insert into unique_table (field) select SQL_BIG_RESULT distinct field
from table ;
select SQL_BIG_RESULT distinct field from table into outfile
'/home/fred/output/outfile';
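One variant we have not actually tried yet is to avoid DISTINCT's on-disk
temp table entirely by putting a UNIQUE index on the destination table and
letting INSERT IGNORE throw away duplicates row by row.  This is only a
sketch (the column width of 255 and the prefix length are guesses, not our
real schema):

```sql
-- Hypothetical alternative: declare uniqueness on the destination table
-- so duplicates are rejected by the index rather than sorted out by
-- DISTINCT in a temp file.
CREATE TABLE unique_table (
  field VARCHAR(255) NOT NULL,
  UNIQUE KEY (field)      -- or UNIQUE KEY (field(50)) for a prefix index
);

-- INSERT IGNORE silently skips rows that would violate the unique key.
INSERT IGNORE INTO unique_table (field)
  SELECT field FROM table;
```

No idea whether the index maintenance on 130M rows would end up faster
than the DISTINCT sort, but it should at least bound the temp-space usage.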

Without SQL_BIG_RESULT it takes around 20 hours to fill all the free space
on the hard drive (25 GB!) with its temp file, without ever writing to the
table or to the outfile.  With SQL_BIG_RESULT it fills the hard drive in
under an hour.
There are no primary keys in any of these tables.

Any ideas what the problem could be or how to resolve this?

This is on a RedHat Linux 8.0 server with the 3.23.52-3 rpms.  The same
query does work under PostgreSQL on this server, though it takes around
4-5 hours and uses somewhere between 11-19 GB of space for temp work.
From a different drive this server also boots into Windows 2000 with
MSSQL, where it completes in 38 minutes.  So it really seems like
something odd is causing the problem.

Thanks in advance for any assistance offered,
        Bryan


---------------------------------------------------------------------
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/           (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php