On 24-Mar-2003 Stefan Toobe wrote:
> Hi all,
>
> does anybody know how to use mysqldump with "big tables"?
>
> I've got a db with a few tables. One of those tables has > 100000 rows.
> Making a dump with mysqldump 3.23.54 causes following error:
>
> mysqldump: Got error: 1104: The SELECT would examine too many records
> and probably take a very long time. Check your WHERE and use SET OPTION
> SQL_BIG_SELECTS=1 if the SELECT is ok when retrieving data from server
>
> the dumpfile is written and contains all tables except that one "big
> table". The command line I'm using is
>
> mysqldump --host=localhost --user=abc --password='xyz' DBNAME > dump.sql
>
> I tried several options like
>
> mysqldump [..] --set-variable SQL_BIG_SELECTS=1 > dump.sql
> mysqldump [..] --set-variable='SQL_BIG_SELECTS=1' > dump.sql
> mysqldump [..] --set-variable='SET OPTION SQL_BIG_SELECTS=1' > dump.sql
> mysqldump [..] -O 'SET OPTION SQL_BIG_SELECTS=1' > dump.sql
> mysqldump [..] --set-variable SET OPTION SQL_BIG_SELECTS=1 > dump.sql
> and others
>
> but none of these writes the complete dump. It seems to be very
> tricky. I can't believe that mysqldump isn't able to dump big tables :-)
> Has anybody a hint? Thanks!!!
>
Check/Increase 'max_join_size'
mysql> show variables like '%join%';
+------------------+------------+
| Variable_name | Value |
+------------------+------------+
| join_buffer_size | 8384512 |
| max_join_size | 4294967295 |
+------------------+------------+
2 rows in set (0.00 sec)
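Note that mysqldump's --set-variable/-O only adjusts client-side variables
(e.g. max_allowed_packet), which is why none of the command lines above took
effect; max_join_size is a server-side limit. A minimal sketch of raising it
on the server, assuming an option file at /etc/my.cnf (the value shown is the
32-bit maximum; pick whatever fits your data):

  [mysqld]
  set-variable = max_join_size=4294967295

then restart mysqld and re-run the dump.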
Regards,
--
Don Read [EMAIL PROTECTED]
-- It's always darkest before the dawn. So if you are going to
steal the neighbor's newspaper, that's the time to do it.
(53kr33t w0rdz: sql table query)
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]