----- Original Message -----
> In the last episode (Jun 15), SDiZ Cheng said:
> > When I use mysqldump, it seems that MySQL opens all the tables in the
> > database. But what I have is a database with over 10,000 tables, and
> > I get a "Too many open files" error on this. I think increasing
> > open_file_limit is not possible, because of the limits of the OS itself.
>
> Which OS are you running?  You should be able to open 20000 files
> (index+data) on most Unixes with a little tuning.

I am using Linux. Yes, I can raise the limit with a simple
  "echo 4294967295 > /proc/sys/fs/file-max"
but I don't want to run my system out of memory.
As the kernel documentation says, memory allocated for file
handles is never freed; the handles may be reused later, but
the problem is that I won't need that many a second time.

> It may be slow, but it should work.  Alternatively, you can temporarily
> move tables 1000 at a time into another directory, mysqldump them, and
> move them back.

Thanks for your suggestion; I will try it =)
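For the archives, the batching idea could be sketched roughly like this (a hypothetical sketch, not a tested procedure: it assumes a database named "mydb", and it substitutes dummy table names generated with seq so the script runs without a live server; in real use the list would come from `mysql -N -e 'SHOW TABLES' mydb` and the mysqldump line would be uncommented):

```shell
# Stand-in for: mysql -N -e 'SHOW TABLES' mydb > tables.txt
# (2500 dummy table names so this sketch is self-contained)
seq -f "table_%g" 1 2500 > tables.txt

# Split the table list into chunks of 1000 names each
# (produces batch_aa, batch_ab, batch_ac, ...)
split -l 1000 tables.txt batch_

# Dump each chunk separately: mysqldump then needs at most
# ~2000 open files (index + data) at any one time, instead of
# 20000+ for the whole database at once.
for f in batch_*; do
  # mysqldump mydb $(cat "$f") >> mydb.sql   # needs a live server
  echo "would dump $(wc -l < "$f") tables listed in $f"
done
```

This avoids both moving table files around on disk and raising fs/file-max, at the cost of running mysqldump several times.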


---------------------------------------------------------------------
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/           (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
