Hi Shawn,
Thanks for helping me out again! I've cut and pasted the answers below your questions:
I would love to say whether your hardware is adequate, but I have forgotten what hardware you have, sorry! Would you mind re-posting all of the specs for your DB server?
2 x 500MHz CPUs, 8GB memory, mysql-max-4.0.20-ibm-aix4.3.3.0-powerpc
Also, could you post the results of:

EXPLAIN SELECT * FROM t20040908 WHERE uid=454;

to make sure the query is using the `uid` key? (It should be, but it never hurts to check.)
mysql> EXPLAIN select * From t20040909 where uid=454;
+-----------+------+---------------+------+---------+-------+------+-------------+
| table | type | possible_keys | key | key_len | ref | rows | Extra |
+-----------+------+---------------+------+---------+-------+------+-------------+
| t20040909 | ref | uid | uid | 4 | const | 4275 | Using where |
+-----------+------+---------------+------+---------+-------+------+-------------+
1 row in set (0.07 sec)
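For what it's worth, that EXPLAIN looks healthy: type=ref with key=uid means the lookup goes through the index and touches only ~4,275 rows. If EXPLAIN ever shows the optimizer skipping the key, two things are worth trying (just a sketch, nothing specific to your schema):

  # Refresh the index statistics the optimizer bases its choice on:
  ANALYZE TABLE t20040909;

  # Or restrict the optimizer to the `uid` index explicitly:
  SELECT * FROM t20040909 USE INDEX (uid) WHERE uid=454;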
Could you also respond with the results of:

SHOW TABLE STATUS LIKE 't20040908';
mysql> SHOW TABLE STATUS LIKE 't20040909';
+-----------+--------+------------+----------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+---------------------+---------------------+----------------+---------+
| Name | Type | Row_format | Rows | Avg_row_length | Data_length | Max_data_length | Index_length | Data_free | Auto_increment | Create_time | Update_time | Check_time | Create_options | Comment |
+-----------+--------+------------+----------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+---------------------+---------------------+----------------+---------+
| t20040909 | MyISAM | Fixed | 25209614 | 32 | 806707648 | 137438953471 | 222495744 | 0 | NULL | 2004-09-09 14:24:41 | 2004-09-10 01:01:35 | 2004-09-09 14:25:40 | | |
+-----------+--------+------------+----------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+---------------------+---------------------+----------------+---------+
1 row in set (0.01 sec)
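Those numbers hang together, by the way. With Row_format=Fixed the arithmetic checks out exactly, and the index works out to roughly 9 bytes per row:

  25209614 rows x 32 bytes = 806707648 bytes  (matches Data_length exactly)
  222495744 / 25209614     = ~8.8 bytes of index per row (~212MB of index total)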
Part of the time it takes to query any table is the time it takes to load the correct index in from disk, so... maybe your hardware could be too slow for this volume of information? I just can't say with any degree of confidence one way or the other yet.
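One cheap thing to rule in or out first: whether the index blocks are even staying in memory. Your whole index is only ~212MB and the box has 8GB of RAM, so it should fit comfortably in the MyISAM key cache. A sketch of what I would check (the 256MB figure is purely illustrative, and SET GLOBAL needs the SUPER privilege):

  SHOW VARIABLES LIKE 'key_buffer_size';
  SHOW STATUS LIKE 'Key_read%';

  # Key_reads / Key_read_requests is the index-block miss rate. If it is
  # more than a percent or so, a bigger key buffer may help; in 4.0 you
  # can raise it at runtime:
  SET GLOBAL key_buffer_size = 268435456;  # 256MB - illustrative value only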
I did notice that this query returned in only about 11% of the time a similar query took on the full dataset (6.87 seconds vs. 61 seconds for a similar lookup on "internet_usage"). Better, but not exactly linear in the reduction of the data size. I would think this is because the time it takes to find records through a B-TREE index grows roughly logarithmically with table size. I thought about putting a HASH index on that column, but those are only available to the MEMORY table type (bummer).
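Since the slow multi-day queries quoted below filter on both `uid` and `time`, the compound key you floated earlier in this thread may still be worth testing: it lets MySQL do the user lookup and the date-range narrowing in a single index pass. A sketch (the index name is mine; try it on a copy first, since adding a key to a 25M-row MyISAM table rebuilds the whole index file):

  # Covers WHERE uid = ? AND time BETWEEN ? AND ? in one B-TREE descent:
  ALTER TABLE t20040909 ADD KEY uid_time (uid, time);

And for completeness, a hash index in 4.0 only exists on a HEAP table (HEAP keys are hash indexes in 4.0), which means the whole table has to fit in memory - fine for a small hot subset, not for a full day table:

  CREATE TABLE t_mem (
    uid int(10) unsigned NOT NULL default '0',
    urlid int(10) unsigned NOT NULL default '0',
    KEY (uid)
  ) TYPE=HEAP;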
BTW, if you and I take this thread back onto the list, everyone else will also have a chance to review your hardware specs and throw in their two cents, too!
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine
"J S" <[EMAIL PROTECTED]> wrote on 09/09/2004 04:48:47 AM:
> Hi Shawn,
>
> I'm working at the moment on the new database! I had a quick question for
> you. Here's my table structure:
>
> t20040908 | CREATE TABLE `t20040908` (
>   `uid` int(10) unsigned NOT NULL default '0',
>   `time` timestamp(14) NOT NULL,
>   `ip` int(10) unsigned NOT NULL default '0',
>   `urlid` int(10) unsigned NOT NULL default '0',
>   `timetaken` smallint(5) unsigned default '0',
>   `cs_size` int(10) unsigned default '0',
>   `sc_size` int(10) unsigned default '0',
>   `method_ID` tinyint(3) unsigned NOT NULL default '0',
>   `action_ID` tinyint(3) unsigned NOT NULL default '0',
>   `virus_ID` tinyint(3) unsigned NOT NULL default '0',
>   `useragent_ID` smallint(5) unsigned NOT NULL default '0',
>   KEY `uid` (`uid`)
> ) TYPE=MyISAM
>
> and it currently has 15263552 rows.
>
> I ran the command:
>
> select * from t20040908 where uid=454;
>
> which came back with 8347 rows in set (6.87 sec).
>
> Does 6.87 seconds seem rather slow to you? It could be the box is just
> too old and slow. But I just wanted to check in case there was anything
> else I should check?
>
> Cheers,
>
> JS.
>
> >There are two ways that I can think of that will combine smaller tables
> >into one longer (not wider) table: MERGE tables and UNION queries. A
> >third option is to manually add rows from multiple source tables to a
> >staging table using INSERT...SELECT... statements.
> >
> >Since we never know from day to day what you will need to query on,
> >creating one big MERGE table would probably be as impractical as your
> >existing setup. That leaves the other two options available to run
> >reports against.
> >
> >Since your information comes in daily batches anyway, that makes not
> >only logical sense, and business sense, but physical sense too!
> >Remember a few emails ago you asked how to move rows from one table to
> >another? It's pretty simple if we are creating identical tables. Just
> >script this into a loop that traverses all of the dates in your range:
> >
> >CREATE TABLE `20040401` LIKE internet_usage;
> >
> >INSERT 20040401
> >SELECT * from internet_usage
> >WHERE time >= '20040401000000' and time <= '20040401235959';
> >
> >DELETE FROM internet_usage
> >WHERE time >= '20040401000000' and time <= '20040401235959';
> >
> >OPTIMIZE NO_WRITE_TO_BINLOG TABLE internet_usage;
> >
> >You have to run OPTIMIZE TABLE in order to recover the empty space left
> >in the table by the DELETE. This will take quite a while in the
> >beginning but it will get logarithmically faster as you clear out the
> >table. (http://dev.mysql.com/doc/mysql/en/OPTIMIZE_TABLE.html)
> >
> >Like I said before, you can improve performance (and disk space usage)
> >during this re-organization if you disable all logging (the common
> >query log and the binlog) while all of this is going on.
> >
> >To get an extract of a user's activity that spans multiple days you
> >could (post 4.1):
> >
> >create temporary table tmpUserActivity
> >SELECT *
> >FROM (
> >  (SELECT * from 20040401 where uid = @userid)
> >  UNION
> >  (SELECT * from 20040402 where uid = @userid)
> >  ... repeat for as many days as you need ...
> >)
> >WHERE time between @starttime and @endtime
> >
> >What makes this work is that MySQL recognizes the outermost () pair as
> >something like a single-table UNION query (I think they refer to it in
> >the docs as an "anonymous view"). Anyway, if you can create UNION
> >queries, you have that technique available.
> >It's not going to be quick to break up that freakin' huge table of
> >yours but I think you will be very happy with the performance when you
> >do. The downside is that making some of your queries will now take more
> >effort (multiple INSERT ... SELECT ... statements to populate a temp
> >table or some UNION queries).
> >
> >Shawn
> >
> >"J S" <[EMAIL PROTECTED]> wrote on 09/03/2004 10:48:19 AM:
> >
> > > Hi,
> > >
> > > >Thanks! (I got both of them but had to pick one to reply to...) 763
> > > >million rows in your internet_usage table. Pretty impressive!
> > >
> > > Now you see why I have issues with space! The database holds all the
> > > log information from the company's proxies since April so there's a
> > > lot in there.
> > >
> > > >I think I understand that your `uid` column refers to the ID of the
> > > >user_table. How long does it take to make an extract of just one
> > > >user?
> > >
> > > Correct.
> > >
> > > mysql> select * from user_table where uid="u752359";
> > > +------+---------+
> > > | ID   | uid     |
> > > +------+---------+
> > > | 3628 | u752359 |
> > > +------+---------+
> > > 1 row in set (0.00 sec)
> > >
> > > mysql> select * from internet_usage where uid=3628;
> > > 163006 rows in set (1 min 1.08 sec)
> > >
> > > >Are there other processes writing to this table while we are trying
> > > >to read from it? Concurrency (table locking) could come into play
> > > >here too.
> > >
> > > No, no one's using it apart from me.
> > >
> > > >CREATE TEMPORARY TABLE tmpUsage
> > > >SELECT time
> > > >  ,ip
> > > >  ,action_ID
> > > >  ,urlid
> > > >  ,cs_size
> > > >  ,sc_size
> > > >  ,useragent_ID
> > > >FROM internet_usage iu
> > > >WHERE iu.uid = @uid
> > > >
> > > >One option to consider at this point may be to make hourly or daily
> > > >tables then use a combination of UNION and INSERT queries to
> > > >rebuild a person's activities over a range of hours or days into a
> > > >temporary table. Then fill in the details based on that temporary
> > > >table.
> > >
> > > The daily tables might be a good idea because currently I have a job
> > > which processes the logs daily and updates the database. But I'm not
> > > sure how it might all fit together? My database skills leave much to
> > > be desired!
> > >
> > > Thanks for your help.
> > >
> > > JS.
> > >
> > > >Shawn
> > > >
> > > >"J S" <[EMAIL PROTECTED]> wrote on 09/03/2004 08:35:37 AM:
> > > >
> > > > > Here it is:
> > > > >
> > > > > | internet_usage | CREATE TABLE `internet_usage` (
> > > > >   `uid` int(10) unsigned NOT NULL default '0',
> > > > >   `time` timestamp(14) NOT NULL,
> > > > >   `ip` int(10) unsigned NOT NULL default '0',
> > > > >   `urlid` int(10) unsigned NOT NULL default '0',
> > > > >   `timetaken` smallint(5) unsigned default '0',
> > > > >   `cs_size` int(10) unsigned default '0',
> > > > >   `sc_size` int(10) unsigned default '0',
> > > > >   `method_ID` tinyint(3) unsigned NOT NULL default '0',
> > > > >   `action_ID` tinyint(3) unsigned NOT NULL default '0',
> > > > >   `virus_ID` tinyint(3) unsigned NOT NULL default '0',
> > > > >   `useragent_ID` smallint(5) unsigned NOT NULL default '0',
> > > > >   KEY `uid` (`uid`)
> > > > > ) TYPE=MyISAM MAX_ROWS=4294967295 |
> > > > > 1 row in set (0.00 sec)
> > > > >
> > > > > >I guess we call that good news and bad news, eh? Good news in
> > > > > >that we found one big part of the bottleneck; bad news in that
> > > > > >we are back to the same issue (indexes). Can you show me the
> > > > > >results of "SHOW CREATE TABLE internet_usage"?
> > > > > >
> > > > > >Shawn
> > > > > >
> > > > > >"J S" <[EMAIL PROTECTED]> wrote on 09/03/2004 06:15:14 AM:
> > > > > >
> > > > > > > Hi Shawn,
> > > > > > >
> > > > > > > I've been trying out the queries you suggested, and I think
> > > > > > > I've narrowed down the problem. It seems to be in the first
> > > > > > > SELECT (see below). As you can see these are quite slow, so
> > > > > > > maybe this is an index problem after all? The other queries
> > > > > > > you gave me were very fast (let me know if you want to see
> > > > > > > the results - I wasn't sure whether you'd appreciate a
> > > > > > > lengthy email!)
> > > > > > >
> > > > > > > Thanks,
> > > > > > >
> > > > > > > JS.
> > > > > > > mysql> CREATE TEMPORARY TABLE tmpActivity
> > > > > > >     -> SELECT time
> > > > > > >     -> ,ip
> > > > > > >     -> ,action_ID
> > > > > > >     -> ,urlid
> > > > > > >     -> ,cs_size
> > > > > > >     -> ,sc_size
> > > > > > >     -> ,useragent_ID
> > > > > > >     -> FROM internet_usage iu
> > > > > > >     -> WHERE iu.uid= @uid
> > > > > > >     -> AND iu.time>20040825000000;
> > > > > > > Query OK, 905 rows affected (15.51 sec)
> > > > > > > Records: 0 Duplicates: 905 Warnings: 0
> > > > > > >
> > > > > > > mysql> CREATE TEMPORARY TABLE tmpActivity
> > > > > > >     -> SELECT time
> > > > > > >     -> ,ip
> > > > > > >     -> ,action_ID
> > > > > > >     -> ,urlid
> > > > > > >     -> ,cs_size
> > > > > > >     -> ,sc_size
> > > > > > >     -> ,useragent_ID
> > > > > > >     -> FROM internet_usage iu
> > > > > > >     -> WHERE iu.uid= @uid
> > > > > > >     -> AND iu.time>20040825000000;
> > > > > > > Query OK, 1244 rows affected (46.78 sec)
> > > > > > > Records: 0 Duplicates: 1244 Warnings: 0
> > > > > > >
> > > > > > > >I think you are correct in saying that the # of JOINs is
> > > > > > > >affecting your performance. Let's try this in stages so
> > > > > > > >that each set (layer) of joins is smaller.
> > > > > > > >
> > > > > > > >Step 1: select everything you are going to need from
> > > > > > > >internet_usage for both the query and the joins into a
> > > > > > > >temporary table. This reduces the first table of the joins
> > > > > > > >from several hundred thousand rows (maybe over a million by
> > > > > > > >now) to just several hundred or at most a few thousand.
> > > > > > > >This alone will save you 1000x+ the CPU cycles it takes to
> > > > > > > >run the main join.
> > > > > > > >
> > > > > > > >Step 2: Do the JOINs and create your results.
> > > > > > > >
> > > > > > > >Here's how I think it may work:
> > > > > > > >
> > > > > > > >-- Step 1
> > > > > > > >
> > > > > > > >SELECT @uid := id from user_table WHERE uid = 'u825407 ';
> > > > > > > >
> > > > > > > >CREATE TEMPORARY TABLE tmpActivity
> > > > > > > >SELECT time
> > > > > > > >  , ip
> > > > > > > >  , action_ID
> > > > > > > >  , urlid
> > > > > > > >  , cs_size
> > > > > > > >  , sc_size
> > > > > > > >  , useragent_ID
> > > > > > > >FROM internet_usage iu
> > > > > > > >WHERE iu.uid = @uid
> > > > > > > >  AND iu.time > 20040825000000
> > > > > > > >  AND iu.time < 20040901000000
> > > > > > > >
> > > > > > > ># you may want to test the whole process with and without
> > > > > > > ># this next step to see if creating the keys helps or hurts
> > > > > > > ># the total time
> > > > > > > >ALTER TABLE tmpActivity add key(urlid), add key(action_id),
> > > > > > > >add key(useragent_id);
> > > > > > > >
> > > > > > > >-- Step 2
> > > > > > > >
> > > > > > > >SELECT
> > > > > > > >  DATE_FORMAT( iu.time, '%Y-%m-%d %H:%i' ) as time
> > > > > > > >  , INET_NTOA(iu.ip) as ip
> > > > > > > >  , RPAD(uac.action,20,' ') as action
> > > > > > > >  , uc.category as category
> > > > > > > >  , concat(usc.scheme,"://",us.server,up.path,
> > > > > > > >      if(uv.url_query_ID=1,"",concat("?",uq.query))) as url
> > > > > > > >  , iu.cs_size as bytes_sent
> > > > > > > >  , iu.sc_size as bytes_received
> > > > > > > >  , ua.useragent as useragent
> > > > > > > >FROM tmpActivity iu
> > > > > > > >INNER JOIN url_visit uv
> > > > > > > >  ON iu.urlid=uv.urlid
> > > > > > > >INNER JOIN url_servers us
> > > > > > > >  ON us.id=uv.url_server_ID
> > > > > > > >INNER JOIN url_paths up
> > > > > > > >  ON up.id=uv.url_path_ID
> > > > > > > >INNER JOIN url_queries uq
> > > > > > > >  ON uq.id=uv.url_query_ID
> > > > > > > >INNER JOIN url_schemes usc
> > > > > > > >  ON usc.id=uv.url_scheme_ID
> > > > > > > >INNER JOIN url_categories uc
> > > > > > > >  ON uc.id=uv.url_category_ID
> > > > > > > >INNER JOIN user_agents ua
> > > > > > > >  ON iu.useragent_ID=ua.ID
> > > > > > > >INNER JOIN url_actions uac
> > > > > > > >  ON iu.action_ID=uac.ID
> > > > > > > >
> > > > > > > >*******************
> > > > > > > >
> > > > > > > >In fact, you may think about breaking this down into
> > > > > > > >another step. This time collect only the columns you need
> > > > > > > >for the UV joins.
> > > > > > > >
> > > > > > > >*******************
> > > > > > > >
> > > > > > > >-- STEP 2A
> > > > > > > >
> > > > > > > >CREATE TEMPORARY TABLE tmpActivity2
> > > > > > > >SELECT time
> > > > > > > >  , ip
> > > > > > > >  , RPAD(uac.action,20,' ') as action
> > > > > > > >  , urlid
> > > > > > > >  , cs_size
> > > > > > > >  , sc_size
> > > > > > > >  , ua.useragent as useragent
> > > > > > > >  , uv.url_server_ID
> > > > > > > >  , uv.url_path_ID
> > > > > > > >  , uv.url_query_ID
> > > > > > > >  , uv.url_scheme_ID
> > > > > > > >  , uv.url_category_ID
> > > > > > > >FROM tmpActivity iu
> > > > > > > >INNER JOIN url_visit uv
> > > > > > > >  ON iu.urlid=uv.urlid
> > > > > > > >INNER JOIN user_agents ua
> > > > > > > >  ON iu.useragent_ID=ua.ID
> > > > > > > >INNER JOIN url_actions uac
> > > > > > > >  ON iu.action_ID=uac.ID
> > > > > > > >
> > > > > > > >DROP TABLE tmpActivity;
> > > > > > > >
> > > > > > > ># consider not creating these keys, too, depending on
> > > > > > > ># performance gain vs. time lost.
> > > > > > > >ALTER TABLE tmpActivity2 add key(url_server_ID),
> > > > > > > >add key(url_path_ID), add key(url_query_ID),
> > > > > > > >add key(url_scheme_ID), add key(url_category_ID);
> > > > > > > >
> > > > > > > >-- STEP 2B
> > > > > > > >
> > > > > > > >SELECT DATE_FORMAT( iu.time, '%Y-%m-%d %H:%i' ) as time
> > > > > > > >  , INET_NTOA(iu.ip) as ip
> > > > > > > >  , iu.action
> > > > > > > >  , uc.category as category
> > > > > > > >  , concat(usc.scheme,"://",us.server,up.path,
> > > > > > > >      if(iu.url_query_ID=1,"",concat("?",uq.query))) as url
> > > > > > > >  , iu.cs_size as bytes_sent
> > > > > > > >  , iu.sc_size as bytes_received
> > > > > > > >  , iu.useragent
> > > > > > > >FROM tmpActivity2 iu
> > > > > > > >INNER JOIN url_servers us
> > > > > > > >  ON us.id=iu.url_server_ID
> > > > > > > >INNER JOIN url_paths up
> > > > > > > >  ON up.id=iu.url_path_ID
> > > > > > > >INNER JOIN url_queries uq
> > > > > > > >  ON uq.id=iu.url_query_ID
> > > > > > > >INNER JOIN url_schemes usc
> > > > > > > >  ON usc.id=iu.url_scheme_ID
> > > > > > > >INNER JOIN url_categories uc
> > > > > > > >  ON uc.id=iu.url_category_ID
> > > > > > > >
> > > > > > > >DROP TABLE tmpActivity2;
> > > > > > > >
> > > > > > > >Yes, it's a _lot_ longer than your original query. However,
> > > > > > > >I think that because we are not MULTIPLYING the size of all
> > > > > > > >tables involved by the size of all of the other tables
> > > > > > > >involved in the JOINs of your original statement, you will
> > > > > > > >get a HUGE performance increase. At each stage I reduced
> > > > > > > >the # of records you needed to join to the next layer of
> > > > > > > >tables by at least an order of magnitude (maybe 3 or more).
> > > > > > > >That should seriously improve your performance.
> > > > > > > >
> > > > > > > >You should notice that I was "cut and paste lazy" and in
> > > > > > > >each new step I aliased the table from the previous step as
> > > > > > > >"iu". That's because I was trying to work fast and didn't
> > > > > > > >want to introduce any more weirdness (typos) than I really
> > > > > > > >needed to.
> > > > > > > >
> > > > > > > >Let me know if this helps, please?
> > > > > > > >
> > > > > > > >Shawn Green
> > > > > > > >Database Administrator
> > > > > > > >Unimin Corporation - Spruce Pine
> > > > > > > >
> > > > > > > >"J S" <[EMAIL PROTECTED]> wrote on 09/02/2004 11:34:11 AM:
> > > > > > > >
> > > > > > > > > Thanks a lot Shawn. I'm still trying to fix the error 27
> > > > > > > > > problem!! Unfortunately I couldn't find another host
> > > > > > > > > with the space on it to run a backup of my database.
> > > > > > > > >
> > > > > > > > > All I'm really trying to do is speed up the query below,
> > > > > > > > > which gets the rows corresponding to a user's internet
> > > > > > > > > usage for a designated period of time.
> > > > > > > > >
> > > > > > > > > At the moment I have an index on column uid in table
> > > > > > > > > internet_usage, so when I run the query below it
> > > > > > > > > presumably gets all the rows for that user quite
> > > > > > > > > quickly. However if I only want the last 5 days, it
> > > > > > > > > still takes quite a long time to get 100 rows or so.
> > > > > > > > >
> > > > > > > > > That's why I was trying to create an index on uid,time
> > > > > > > > > but now I'm not so sure that would speed the query up. I
> > > > > > > > > think the joins may be the cause of the slowdown?
> > > > > > > > >
> > > > > > > > > SELECT DATE_FORMAT( iu.time, '%Y-%m-%d %H:%i' ) as time,
> > > > > > > > > INET_NTOA(iu.ip) as ip, RPAD(uac.action,20,' ') as
> > > > > > > > > action, uc.category as category,
> > > > > > > > > concat(usc.scheme,"://",us.server,up.path,
> > > > > > > > > if(uv.url_query_ID=1,"",concat("?",uq.query))) as url,
> > > > > > > > > iu.cs_size as bytes_sent, iu.sc_size as bytes_received,
> > > > > > > > > ua.useragent as useragent
> > > > > > > > > FROM internet_usage iu
> > > > > > > > > INNER JOIN url_visit uv
> > > > > > > > >   ON iu.urlid=uv.urlid
> > > > > > > > > INNER JOIN url_servers us
> > > > > > > > >   ON us.id=uv.url_server_ID
> > > > > > > > > INNER JOIN url_paths up
> > > > > > > > >   ON up.id=uv.url_path_ID
> > > > > > > > > INNER JOIN url_queries uq
> > > > > > > > >   ON uq.id=uv.url_query_ID
> > > > > > > > > INNER JOIN url_schemes usc
> > > > > > > > >   ON usc.id=uv.url_scheme_ID
> > > > > > > > > INNER JOIN url_categories uc
> > > > > > > > >   ON uc.id=uv.url_category_ID
> > > > > > > > > INNER JOIN user_agents ua
> > > > > > > > >   ON iu.useragent_ID=ua.ID
> > > > > > > > > INNER JOIN url_actions uac
> > > > > > > > >   ON iu.action_ID=uac.ID
> > > > > > > > > INNER JOIN user_table ut
> > > > > > > > >   ON ut.id=iu.uid
> > > > > > > > > WHERE ut.uid="u825407 " AND iu.time > 20040825000000
> > > > > > > > >   AND iu.time < 20040901000000
> > > > > > > > > ORDER BY iu.time
> > > > > > > > >
> > > > > > > > > JS.
> > > > > > > > >
> > > > > > > > > >OUCH! I hate running out of room.
> > > > > > > > > >IF you are using InnoDB tables you are S-O-L for
> > > > > > > > > >getting your disk space back without a full
> > > > > > > > > >export/delete/re-create cycle (a feature request
> > > > > > > > > >already exists asking that OPTIMIZE TABLE be expanded
> > > > > > > > > >to work with InnoDB, too).
> > > > > > > > > >
> > > > > > > > > >Copying data from one table to another won't help
> > > > > > > > > >unless the tables are on different physical devices.
> > > > > > > > > >If they aren't, you might as well be re-arranging the
> > > > > > > > > >deck chairs on the Titanic. Still, you want to OPTIMIZE
> > > > > > > > > >whenever you can (once per copy/delete batch) during
> > > > > > > > > >this process to keep your DBs as small as possible.
> > > > > > > > > >You may halt your query logging and your binlogging so
> > > > > > > > > >that your insert commands and data are not duplicated
> > > > > > > > > >to disk as well. Turn them back on when your transfer
> > > > > > > > > >is complete. I think I remember reading that you can
> > > > > > > > > >turn off transaction logging, too. I have never needed
> > > > > > > > > >to do that so I can't tell you where to find the
> > > > > > > > > >option or even what it's called (sorry!)
> > > > > > > > > >
> > > > > > > > > >Best of luck!
> > > > > > > > > >
> > > > > > > > > >Shawn Green
> > > > > > > > > >Database Administrator
> > > > > > > > > >Unimin Corporation - Spruce Pine
> > > > > > > > > >
> > > > > > > > > >"J S" <[EMAIL PROTECTED]> wrote on 09/02/2004 08:38:30 AM:
> > > > > > > > > >
> > > > > > > > > > > Hi,
> > > > > > > > > > >
> > > > > > > > > > > I want to copy data from one table to another.
> > > > > > > > > > > However, I need to do this carefully because I
> > > > > > > > > > > haven't got much space left. I was thinking of
> > > > > > > > > > > maybe selecting data from every 100000 rows of the
> > > > > > > > > > > old table, inserting it into the new table, then
> > > > > > > > > > > deleting those rows from the old table.
> > > > > > > > > > >
> > > > > > > > > > > Could someone help me out with the SQL for this
> > > > > > > > > > > please? Or tell me if there's a better way of doing
> > > > > > > > > > > this?
> > > > > > > > > > >
> > > > > > > > > > > Thanks,
> > > > > > > > > > >
> > > > > > > > > > > JS.
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]